Decoding of the BCH Codes
Raju Hazari
Department of Computer Science and Engineering
National Institute of Technology Calicut
March 30, 2023
Syndrome Calculation
Suppose that a code word v(x) = v0 + v1x + v2x^2 + · · · + vn−1x^{n−1} is transmitted and the transmission errors result in the following received vector:
r(x) = r0 + r1x + r2x^2 + · · · + rn−1x^{n−1}.
Let e(x) be the error pattern. Then
r(x) = v(x) + e(x). (1)
The first step of decoding a code is to compute the syndrome from
the received vector r(x).
For decoding a t-error-correcting primitive BCH code, the
syndrome is a 2t-tuple,
S = (S1, S2, · · · , S2t) = r · H^T. (2)
We find that the ith component of the syndrome is
Si = r(α^i) = r0 + r1α^i + r2α^{2i} + · · · + rn−1α^{(n−1)i} (3)
for 1 ≤ i ≤ 2t.
Note that the syndrome components are elements in the field GF(2^m).
These components can be computed from r(x) as follows.
Dividing r(x) by the minimal polynomial φi(x) of α^i, we obtain
r(x) = ai(x)φi(x) + bi(x),
where bi(x) is the remainder with degree less than that of φi(x).
Since φi(α^i) = 0, we have
Si = r(α^i) = bi(α^i). (4)
Thus, the syndrome component Si is obtained by evaluating bi(x) with x = α^i.
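As a concrete illustration (my own sketch, not part of the slides), the syndromes can also be computed by evaluating r(α^i) directly. The sketch below is written for the GF(2^4) examples that follow and assumes the primitive polynomial p(x) = 1 + x + x^4, so that α^4 = 1 + α; all helper names are invented here.

# Minimal GF(2^4) arithmetic over the primitive polynomial 1 + x + x^4,
# i.e. alpha^4 = alpha + 1. Field elements are stored as 4-bit integers.
def build_gf16():
    exp = [0] * 15                  # exp[i] = alpha^i as a 4-bit integer
    x = 1
    for i in range(15):
        exp[i] = x
        x <<= 1
        if x & 0b10000:             # reduce modulo 1 + x + x^4
            x ^= 0b10011
    log = {exp[i]: i for i in range(15)}
    return exp, log

EXP, LOG = build_gf16()

def gf_mul(a, b):
    # multiplication in GF(2^4) via exponent/log tables
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def syndrome(r_bits, i):
    """S_i = r(alpha^i) for a received binary vector r = (r_0, ..., r_14)."""
    s = 0
    for j, bit in enumerate(r_bits):
        if bit:
            s ^= EXP[(i * j) % 15]  # add alpha^(i*j); addition in GF(2^m) is XOR
    return s

For the received vector of the (15, 7) example that follows, syndrome([1,0,0,0,0,0,0,0,1,0,0,0,0,0,0], 1) returns EXP[2], i.e. S1 = α^2, and syndrome(..., 3) returns EXP[7] = α^7, in agreement with the values obtained below from the remainders bi(x).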
Syndrome Calculation (Example)
Consider the double-error correcting (15, 7) BCH code. Suppose
that the vector
r = (1 0 0 0 0 0 0 0 1 0 0 0 0 0 0)
is received.
The corresponding polynomial is r(x) = 1 + x^8.
The syndrome consists of four components,
S = (S1, S2, S3, S4).
The minimal polynomials for α, α^2 and α^4 are identical and
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The minimal polynomial of α^3 is
φ3(x) = 1 + x + x^2 + x^3 + x^4.
Dividing r(x) = 1 + x^8 by φ1(x) = 1 + x + x^4, the remainder is
b1(x) = x^2.
Dividing r(x) = 1 + x^8 by φ3(x) = 1 + x + x^2 + x^3 + x^4, the remainder is
b3(x) = 1 + x^3.
Substituting α, α^2, and α^4 into b1(x), we obtain
S1 = α^2, S2 = α^4, S4 = α^8.
Substituting α^3 into b3(x), we obtain
S3 = 1 + α^9 = 1 + α + α^3 = α^7.
Thus,
S = (α^2, α^4, α^7, α^8).
Decoding Algorithm for the BCH Codes
Since α, α^2, · · · , α^{2t} are roots of each code polynomial, v(α^i) = 0 for 1 ≤ i ≤ 2t.
From (1) and (3), we obtain the following relationship between the syndrome components and the error pattern:
Si = e(α^i) (5)
for 1 ≤ i ≤ 2t.
From (5) we see that the syndrome S depends on the error pattern e only.
Suppose that the error pattern e(x) has ν errors at locations x^{j1}, x^{j2}, · · · , x^{jν}, that is,
e(x) = x^{j1} + x^{j2} + · · · + x^{jν}, (6)
where 0 ≤ j1 < j2 < · · · < jν < n.
From (5) and (6), we obtain the following set of equations:
S1 = α^{j1} + α^{j2} + · · · + α^{jν}
S2 = (α^{j1})^2 + (α^{j2})^2 + · · · + (α^{jν})^2
S3 = (α^{j1})^3 + (α^{j2})^3 + · · · + (α^{jν})^3
...
S2t = (α^{j1})^{2t} + (α^{j2})^{2t} + · · · + (α^{jν})^{2t}, (7)
where α^{j1}, α^{j2}, · · · , α^{jν} are unknown.
Any method for solving these equations is a decoding algorithm for
the BCH codes.
Once α^{j1}, α^{j2}, · · · , α^{jν} have been found, the powers j1, j2, · · · , jν tell us the error locations in e(x).
In general, the equations of (7) have many possible solutions (2^k of them).
Each solution yields a different error pattern.
If the number of errors in the actual error pattern e(x) is t or less,
the solution that yields an error pattern with the smallest number
of errors is the right solution.
That is, the error pattern corresponding to this solution is the
most probable error pattern e(x) caused by the channel noise.
For large t, solving the equations of (7) directly is difficult and
ineffective.
Following is an effective procedure to determine α^{jl} for l = 1, 2, · · · , ν from the syndrome components Si's.
Let βl = α^{jl} for 1 ≤ l ≤ ν. We call these elements the error location numbers since they tell us the locations of the errors.
Now the equations of (7) can be expressed in the following form:
S1 = β1 + β2 + · · · + βν
S2 = β1^2 + β2^2 + · · · + βν^2
...
S2t = β1^{2t} + β2^{2t} + · · · + βν^{2t} (8)
These 2t equations are symmetric functions in β1, β2, · · · , βν,
which are known as power-sum symmetric functions.
Now, we define the following polynomial:
σ(x) = (1 + β1x)(1 + β2x) · · · (1 + βνx)
     = σ0 + σ1x + σ2x^2 + · · · + σνx^ν (9)
The roots of σ(x) are β1^{−1}, β2^{−1}, · · · , βν^{−1}, which are the inverses of the error location numbers. For this reason, σ(x) is called the error-location polynomial.
The coefficients of σ(x) and the error-location numbers are related by the following equations:
σ0 = 1
σ1 = β1 + β2 + · · · + βν
σ2 = β1β2 + β1β3 + · · · + βν−1βν
... (10)
σν = β1β2 · · · βν.
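As a quick sanity check (my own illustration, not on the original slide): for ν = 2, definition (9) expands to σ(x) = (1 + β1x)(1 + β2x) = 1 + (β1 + β2)x + β1β2x^2, so σ1 = β1 + β2 and σ2 = β1β2, exactly as listed in (10).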
The σi’s are known as elementary symmetric functions of βl’s.
From (8) and (10), we see that the σi’s are related to the
syndrome components Sj’s.
They are related to the syndrome components by the following Newton's identities:
S1 + σ1 = 0
S2 + σ1S1 + 2σ2 = 0
S3 + σ1S2 + σ2S1 + 3σ3 = 0
... (11)
Sν + σ1Sν−1 + · · · + σν−1S1 + νσν = 0
Sν+1 + σ1Sν + · · · + σν−1S2 + σνS1 = 0
...
If it is possible to determine the elementary symmetric functions
σ1, σ2, · · · , σν from the equations of (11), the error location
numbers β1, β2, · · · , βν can be found by determining the roots of
the error-location polynomial σ(x).
The equations of (11) may have many solutions; however, we want
to find the solution that yields a σ(x) of minimal degree.
This σ(x) will produce an error pattern with a minimum number
of errors. If ν ≤ t, this σ(x) will give the actual error pattern e(x).
Outline of the error-correcting procedure for BCH codes
The procedure consists of three major steps:
1. Compute the syndrome S = (S1, S2, · · · , S2t) from the received polynomial r(x).
2. Determine the error-location polynomial σ(x) from the syndrome components S1, S2, · · · , S2t.
3. Determine the error-location numbers β1, β2, · · · , βν by finding the roots of σ(x), and correct the errors in r(x).
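These three steps map directly onto a small driver routine. The sketch below is only an outline of the control flow under the conventions of the earlier GF(2^4) sketch; the helpers syndrome, berlekamp and sigma_roots are my own names and are sketched on later slides, so none of this is code from the original lecture.

def decode_bch(r_bits, t):
    """Three-step decoder for a binary BCH code of length n = 2^m - 1."""
    n = len(r_bits)
    # Step 1: syndrome components S_1, ..., S_2t from r(x)
    S = [None] + [syndrome(r_bits, i) for i in range(1, 2 * t + 1)]
    # Step 2: error-location polynomial sigma(x) via Berlekamp's iteration
    sigma = berlekamp(S, t)
    # Step 3: the roots of sigma(x) are the inverses of the error-location
    # numbers, so a root alpha^i marks an error at position j = (n - i) mod n
    for i in sigma_roots(sigma):
        r_bits[(n - i) % n] ^= 1    # correct the bit at that location
    return r_bits

For the (15, 5) example worked out later in the deck, decode_bch(list(r), 3) should return the all-zero code vector.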
Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]
The first step of iteration is to find a minimum-degree polynomial
σ(1)(x) whose coefficients satisfy the first Newton’s identity of
(11).
The next step is to test whether the coefficients of σ(1)(x) also satisfy the second Newton's identity of (11).
If the coefficients of σ(1)(x) do satisfy the second Newton’s identity
of (11), we set
σ(2)(x) = σ(1)(x)
If the coefficients of σ(1)(x) do not satisfy the second Newton’s
identity of (11), a correction term is added to σ(1)(x) to form
σ(2)(x) such that σ(2)(x) has minimum degree and its coefficients
satisfy the first two Newton’s identities of (11).
Therefore, at the end of the second step of iteration, we obtain a
minimum-degree polynomial σ(2)(x) whose coefficients satisfy the
first two Newton’s identities of (11).
The third step of iteration is to find a minimum-degree polynomial
σ(3)(x) from σ(2)(x) such that the coefficients of σ(3)(x) satisfy the
first three Newton’s identities of (11).
We test whether the coefficients of σ(2)(x) satisfy the third
Newton’s identity of (11). If they do, we set σ(3)(x) = σ(2)(x).
If they do not, a correction term is added to σ(2)(x) to form
σ(3)(x).
Iteration continues until σ(2t)(x) is obtained.
Then σ(2t)(x) is taken to be the error-location polynomial σ(x),
that is,
σ(x) = σ(2t)(x).
This σ(x) will yield an error pattern e(x) of minimum weight that
satisfies the equations of (7).
If the number of errors in the received polynomial r(x) is t or less,
then σ(x) produces the true error pattern.
Let
σ(µ)(x) = 1 + σ1^(µ) x + σ2^(µ) x^2 + · · · + σ_{lµ}^{(µ)} x^{lµ} (12)
be the minimum-degree polynomial determined at the µth step of iteration whose coefficients satisfy the first µ Newton's identities of (11).
To determine σ(µ+1)(x), we compute the following quantity:
dµ = Sµ+1 + σ1^(µ) Sµ + σ2^(µ) Sµ−1 + · · · + σ_{lµ}^{(µ)} S_{µ+1−lµ} (13)
This quantity dµ is called the µth discrepancy.
If dµ = 0, the coefficients of σ(µ)(x) satisfy the (µ + 1)th Newton's identity. We set
σ(µ+1)(x) = σ(µ)(x).
If dµ ≠ 0, the coefficients of σ(µ)(x) do not satisfy the (µ + 1)th Newton's identity, and a correction term must be added to σ(µ)(x) to obtain σ(µ+1)(x).
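Equation (13) translates directly into code. A minimal sketch (my own, reusing gf_mul and the syndrome-list convention S[1] = S1 from the earlier GF(2^4) sketch; addition in GF(2^m) is XOR):

def discrepancy(sigma, S, mu):
    """d_mu of eq. (13); sigma is the coefficient list [1, sigma_1, ..., sigma_l]
    of sigma^(mu)(x), lowest degree first."""
    d = 0
    for i, c in enumerate(sigma):
        if c and mu + 1 - i >= 1:
            d ^= gf_mul(c, S[mu + 1 - i])   # sigma_i * S_(mu+1-i), summed by XOR
    return d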
To accomplish this correction, we go back to the steps prior to the µth step and determine a polynomial σ(p)(x) such that the pth discrepancy dp ≠ 0 and p − lp [lp is the degree of σ(p)(x)] has the largest value. Then
σ(µ+1)(x) = σ(µ)(x) + dµ d_p^{−1} x^{µ−p} σ(p)(x), (14)
which is the minimum-degree polynomial whose coefficients satisfy the first µ + 1 Newton's identities.
To carry out the iteration of finding σ(x), we fill up the following table, where lµ is the degree of σ(µ)(x).

µ      σ(µ)(x)    dµ    lµ    µ − lµ
−1     1          1     0     −1
0      1          S1    0     0
1
2
...
2t
Assuming that we have filled out all rows up to and including the µth row, we fill out the (µ + 1)th row as follows:
1. If dµ = 0, then σ(µ+1)(x) = σ(µ)(x) and lµ+1 = lµ.
2. If dµ ≠ 0, find another row p prior to the µth row such that dp ≠ 0 and the number p − lp in the last column of the table has the largest value. Then σ(µ+1)(x) is given by (14) and
lµ+1 = max(lµ, lp + µ − p). (15)
In either case,
dµ+1 = Sµ+2 + σ1^(µ+1) Sµ+1 + · · · + σ_{lµ+1}^{(µ+1)} S_{µ+2−lµ+1}, (16)
where the σ_l^{(µ+1)}'s are the coefficients of σ(µ+1)(x).
The polynomial σ(2t)(x) in the last row should be the required σ(x).
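The table-filling rules above can be prototyped in a few lines. This is my own sketch, not code from the lecture; it reuses EXP, LOG and gf_mul from the first GF(2^4) sketch and the discrepancy function sketched earlier, and stores each table row as (µ, σ(µ), dµ, lµ).

def gf_inv(a):
    # multiplicative inverse in GF(2^4): alpha^i -> alpha^(15 - i)
    return EXP[(15 - LOG[a]) % 15]

def poly_add(p, q):
    """Coefficient-wise sum of two GF(2^m) polynomials (addition is XOR)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a ^ b for a, b in zip(p, q)]

def poly_scale_shift(p, c, k):
    """Return c * x^k * p(x) for a coefficient list p, lowest degree first."""
    return [0] * k + [gf_mul(c, pi) for pi in p]

def berlekamp(S, t):
    """Error-location polynomial sigma(x) as a coefficient list, given the
    syndrome list S = [None, S_1, ..., S_2t]."""
    rows = [(-1, [1], 1, 0), (0, [1], S[1], 0)]      # the two initial rows
    for mu in range(0, 2 * t):
        _, sigma, d, l = rows[-1]
        if d == 0:
            new_sigma = sigma                        # rule 1 above
        else:
            # rule 2: pick the prior row p with d_p != 0 maximizing p - l_p
            p, sigma_p, d_p, l_p = max((r for r in rows[:-1] if r[2] != 0),
                                       key=lambda r: r[0] - r[3])
            # correction term d_mu * d_p^-1 * x^(mu-p) * sigma^(p)(x), eq. (14)
            correction = poly_scale_shift(sigma_p, gf_mul(d, gf_inv(d_p)), mu - p)
            new_sigma = poly_add(sigma, correction)
        while len(new_sigma) > 1 and new_sigma[-1] == 0:
            new_sigma.pop()                          # trim so len - 1 is the degree l_(mu+1)
        new_d = discrepancy(new_sigma, S, mu + 1) if mu + 1 < 2 * t else 0
        rows.append((mu + 1, new_sigma, new_d, len(new_sigma) - 1))
    return rows[-1][1]                               # sigma^(2t)(x) = sigma(x)

Running berlekamp([None, 1, 1, EXP[10], 1, EXP[10], EXP[5]], 3) on the syndromes of the (15, 5) example below reproduces the table on the following slides and returns [1, 1, 0, EXP[5]], i.e. σ(x) = 1 + x + α^5x^3.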
Example
Consider the (15, 5) triple-error correcting BCH code. Assume
that the code vector of all zeros,
v = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
is transmitted and the received vector is
r = (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0)
Then r(x) = x^3 + x^5 + x^{12}.
The minimal polynomials for α, α^2 and α^4 are identical and
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The elements α^3 and α^6 have the same minimal polynomial,
φ3(x) = φ6(x) = 1 + x + x^2 + x^3 + x^4.
The minimal polynomial for α^5 is
φ5(x) = 1 + x + x^2.
Dividing r(x) by φ1(x), φ3(x) and φ5(x), respectively, we obtain the following remainders:
b1(x) = 1,
b3(x) = 1 + x^2 + x^3,
b5(x) = x^2.
Substituting α, α^2 and α^4 into b1(x), we obtain the following syndrome components:
S1 = S2 = S4 = 1.
Substituting α^3 and α^6 into b3(x), we obtain
S3 = 1 + α^6 + α^9 = α^10,
S6 = 1 + α^12 + α^18 = α^5.
Substituting α^5 into b5(x), we have
S5 = α^10.
Using the iterative procedure, we obtain the table below. Thus, the error-location polynomial is
σ(x) = σ(6)(x) = 1 + x + α^5x^3.

µ     σ(µ)(x)            dµ      lµ    µ − lµ
−1    1                  1       0     −1
0     1                  1       0     0
1     1 + x              0       1     0     (take p = −1)
2     1 + x              α^5     1     1
3     1 + x + α^5x^2     0       2     1     (take p = 0)
4     1 + x + α^5x^2     α^10    2     2
5     1 + x + α^5x^3     0       3     2     (take p = 2)
6     1 + x + α^5x^3     -       -     -
We can easily check that α^3, α^10 and α^12 are the roots of σ(x).
Their inverses are α^12, α^5, and α^3, which are the error-location numbers. Therefore, the error pattern is
e(x) = x^3 + x^5 + x^{12}.
Adding e(x) to the received polynomial r(x), we obtain the all-zero code vector.
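The root check above can be automated by simply trying every nonzero field element, which is in essence a Chien search. A minimal sketch (my own, again reusing EXP and gf_mul from the earlier GF(2^4) sketch):

def sigma_roots(sigma):
    """Exponents i such that alpha^i is a root of sigma(x)."""
    roots = []
    for i in range(15):
        # evaluate sigma at alpha^i: sum over k of sigma_k * alpha^(i*k)
        val = 0
        for k, c in enumerate(sigma):
            if c:
                val ^= gf_mul(c, EXP[(i * k) % 15])
        if val == 0:
            roots.append(i)
    return roots

For sigma = [1, 1, 0, EXP[5]] this returns [3, 10, 12]; the error locations are the exponents of the inverses, (15 − 3) % 15 = 12, (15 − 10) % 15 = 5 and (15 − 12) % 15 = 3, matching e(x) = x^3 + x^5 + x^{12}.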
Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]
If the number of errors in the received polynomial r(x) is less than
the designed error correcting capability t of the code, it is not
necessary to carry out the 2t steps of iteration to find the
error-location polynomial σ(x).
Let σ(µ)(x) and dµ be the solution and discrepancy obtained at the
µth step of iteration.
Let lµ be the degree of σ(µ)(x). Now, if dµ and the discrepancies at
the next t − lµ − 1 steps are all zero, σ(µ)(x) is the error location
polynomial.
If the number of errors in the received polynomial r(x) is ν (ν ≤ t), only t + ν steps of iteration are needed to determine the error-location polynomial σ(x).
If ν is small, the reduction in the number of iteration steps results in an increase in decoding speed.
The iterative algorithm for finding σ(x) applies not only to binary BCH codes but also to nonbinary BCH codes.