Decoding of the BCH Codes
Raju Hazari
Department of Computer Science and Engineering
National Institute of Technology Calicut
March 30, 2023
Raju Hazari (NIT, Calicut) Coding Theory, Winter 2023 March 30, 2023 1 / 26
Syndrome Calculation
Suppose that a code word v(x) = v0 + v1x + v2x^2 + · · · + vn−1x^(n−1)
is transmitted and the transmission errors result in the following
received vector:
r(x) = r0 + r1x + r2x^2 + · · · + rn−1x^(n−1).
Let e(x) be the error pattern. Then
r(x) = v(x) + e(x). (1)
The first step of decoding a code is to compute the syndrome from
the received vector r(x).
For decoding a t-error-correcting primitive BCH code, the
syndrome is a 2t-tuple,
S = (S1, S2, · · · , S2t) = r · H^T. (2)
We find that the ith component of the syndrome is
Si = r(α^i) = r0 + r1α^i + r2α^(2i) + · · · + rn−1α^((n−1)i) (3)
for 1 ≤ i ≤ 2t.
Note that the syndrome components are elements in the field GF(2^m).
These components can be computed from r(x) as follows.
Dividing r(x) by the minimal polynomial φi(x) of α^i, we obtain
r(x) = ai(x)φi(x) + bi(x),
where bi(x) is the remainder, with degree less than that of φi(x).
Since φi(α^i) = 0, we have
Si = r(α^i) = bi(α^i). (4)
Thus, the syndrome component Si is obtained by evaluating bi(x) at
x = α^i.
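This remainder computation can be sketched in Python; packing binary polynomials into integer bitmasks (bit k holds the coefficient of x^k) is an implementation choice for illustration, not part of the slides:

```python
def poly_mod(r, m):
    """Remainder of r(x) divided by m(x) over GF(2); ints as bitmasks."""
    deg_m = m.bit_length() - 1
    while r.bit_length() - 1 >= deg_m:
        # XOR a shifted copy of m(x) to cancel the leading term of r(x)
        r ^= m << (r.bit_length() - 1 - deg_m)
    return r

# r(x) = 1 + x^8, phi_1(x) = 1 + x + x^4, phi_3(x) = 1 + x + x^2 + x^3 + x^4
b1 = poly_mod(0b100000001, 0b10011)   # 0b100   -> b1(x) = x^2
b3 = poly_mod(0b100000001, 0b11111)   # 0b1001  -> b3(x) = 1 + x^3
```

These two remainders are exactly the b1(x) and b3(x) that appear in the worked example that follows.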
Syndrome Calculation (Example)
Consider the double-error correcting (15, 7) BCH code. Suppose
that the vector
r = (1 0 0 0 0 0 0 0 1 0 0 0 0 0 0)
is received.
The corresponding polynomial is r(x) = 1 + x^8.
The syndrome consists of four components,
S = (S1, S2, S3, S4)
The minimal polynomials for α, α^2, and α^4 are identical:
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The minimal polynomial of α^3 is
φ3(x) = 1 + x + x^2 + x^3 + x^4.
Dividing r(x) = 1 + x^8 by φ1(x) = 1 + x + x^4, the remainder is
b1(x) = x^2.
Dividing r(x) = 1 + x^8 by φ3(x) = 1 + x + x^2 + x^3 + x^4, the
remainder is
b3(x) = 1 + x^3.
Substituting α, α^2, and α^4 into b1(x), we obtain
S1 = α^2, S2 = α^4, S4 = α^8.
Substituting α^3 into b3(x), we obtain
S3 = 1 + α^9 = 1 + α + α^3 = α^7.
Thus,
S = (α^2, α^4, α^7, α^8).
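The same syndromes can be obtained by evaluating r(α^i) directly; below is a minimal Python sketch, assuming GF(16) is generated by the primitive polynomial 1 + x + x^4 (the field used throughout these slides). The table-based arithmetic and function names are illustrative choices:

```python
# Log/antilog tables for GF(16) generated by 1 + x + x^4
EXP, LOG = [0] * 15, [0] * 16
e = 1
for i in range(15):
    EXP[i], LOG[e] = e, i
    e <<= 1
    if e & 0b10000:          # reduce modulo 1 + x + x^4
        e ^= 0b10011

def syndrome(r_bits, i):
    """S_i = r(alpha^i): XOR alpha^(i*pos) over the nonzero bit positions."""
    s = 0
    for pos, bit in enumerate(r_bits):
        if bit:
            s ^= EXP[(i * pos) % 15]
    return s

r = [1,0,0,0,0,0,0,0,1,0,0,0,0,0,0]       # r(x) = 1 + x^8
S = [syndrome(r, i) for i in range(1, 5)]
print([LOG[s] for s in S])                # exponents of alpha: [2, 4, 7, 8]
```

The printed exponents reproduce S = (α^2, α^4, α^7, α^8) above.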
Decoding Algorithm for the BCH Codes
Since α, α^2, · · · , α^(2t) are roots of each code polynomial, v(α^i) = 0
for 1 ≤ i ≤ 2t.
From (1) and (3), we obtain the following relationship between the
syndrome components and the error pattern :
Si = e(α^i) (5)
for 1 ≤ i ≤ 2t.
From (5) we see that the syndrome S depends on the error pattern
e only.
Suppose that the error pattern e(x) has ν errors at locations
x^j1, x^j2, · · · , x^jν; that is,
e(x) = x^j1 + x^j2 + · · · + x^jν, (6)
where 0 ≤ j1 < j2 < · · · < jν < n.
From (5) and (6), we obtain the following set of equations:
S1 = α^j1 + α^j2 + · · · + α^jν
S2 = (α^j1)^2 + (α^j2)^2 + · · · + (α^jν)^2
S3 = (α^j1)^3 + (α^j2)^3 + · · · + (α^jν)^3
· · ·
S2t = (α^j1)^(2t) + (α^j2)^(2t) + · · · + (α^jν)^(2t), (7)
where α^j1, α^j2, · · · , α^jν are unknown.
Any method for solving these equations is a decoding algorithm for
the BCH codes.
Once α^j1, α^j2, · · · , α^jν have been found, the powers j1, j2, · · · , jν tell
us the error locations in e(x).
In general, the equations of (7) have many possible solutions (2^k of
them).
Each solution yields a different error pattern.
If the number of errors in the actual error pattern e(x) is t or less,
the solution that yields an error pattern with the smallest number
of errors is the right solution.
That is, the error pattern corresponding to this solution is the
most probable error pattern e(x) caused by the channel noise.
For large t, solving the equations of (7) directly is difficult and
ineffective.
The following is an effective procedure for determining the α^jl for
l = 1, 2, · · · , ν from the syndrome components Si.
Let βl = α^jl for 1 ≤ l ≤ ν. We call these elements the error-location
numbers, since they tell us the locations of the errors.
Now the equations of (7) can be expressed in the following form:
S1 = β1 + β2 + · · · + βν
S2 = β1^2 + β2^2 + · · · + βν^2
· · ·
S2t = β1^(2t) + β2^(2t) + · · · + βν^(2t) (8)
These 2t equations are symmetric functions in β1, β2, · · · , βν,
which are known as power-sum symmetric functions.
Now, we define the following polynomial:
σ(x) = (1 + β1x)(1 + β2x) · · · (1 + βνx)
= σ0 + σ1x + σ2x^2 + · · · + σνx^ν (9)
The roots of σ(x) are β1^(−1), β2^(−1), · · · , βν^(−1), which are the
inverses of the error-location numbers. For this reason, σ(x) is called
the error-location polynomial.
The coefficients of σ(x) and the error-location numbers are related by
the following equations:
σ0 = 1
σ1 = β1 + β2 + · · · + βν
σ2 = β1β2 + β1β3 + · · · + βν−1βν
· · ·
σν = β1β2 · · · βν (10)
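Relations (10) can be checked by multiplying out the linear factors of (9). A small sketch over GF(16) (primitive polynomial 1 + x + x^4), using the error-location numbers α^3, α^5, and α^12 from the (15, 5) example worked out later in these slides; the function name is illustrative:

```python
# GF(16) tables, generated by 1 + x + x^4
EXP, LOG = [0] * 15, [0] * 16
e = 1
for i in range(15):
    EXP[i], LOG[e] = e, i
    e = (e << 1) ^ (0b10011 if e & 0b1000 else 0)

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def sigma_from_betas(betas):
    """Expand (1 + b1 x)(1 + b2 x)...; index k of the result holds sigma_k."""
    poly = [1]
    for b in betas:
        nxt = poly + [0]
        for k, c in enumerate(poly):   # add b*x times the current polynomial
            nxt[k + 1] ^= gf_mul(b, c)
        poly = nxt
    return poly

# betas alpha^3, alpha^5, alpha^12 give sigma(x) = 1 + x + alpha^5 x^3
print(sigma_from_betas([EXP[3], EXP[5], EXP[12]]))   # [1, 1, 0, 6]; 6 = alpha^5
```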
The σi’s are known as elementary symmetric functions of βl’s.
From (8) and (10), we see that the σi’s are related to the
syndrome components Sj’s.
They are related to the syndrome components by the following
Newton’s identities :
S1 + σ1 = 0
S2 + σ1S1 + 2σ2 = 0
S3 + σ1S2 + σ2S1 + 3σ3 = 0
· · ·
Sν + σ1Sν−1 + · · · + σν−1S1 + νσν = 0
Sν+1 + σ1Sν + · · · + σν−1S2 + σνS1 = 0
· · · (11)
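A small sanity check of (11): in a field of characteristic 2 the term µσµ vanishes for even µ, so a verifier adds σµ only when µ is odd and µ ≤ ν. The sketch below (GF(16) generated by 1 + x + x^4; helper name illustrative) uses the syndrome values and σ(x) from the (15, 5) example worked out later in these slides:

```python
# GF(16) tables, generated by 1 + x + x^4
EXP, LOG = [0] * 15, [0] * 16
e = 1
for i in range(15):
    EXP[i], LOG[e] = e, i
    e = (e << 1) ^ (0b10011 if e & 0b1000 else 0)

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def newton_lhs(S, sigma, mu):
    """Left side of the mu-th identity of (11); S is 1-indexed, sigma[k] = sigma_k."""
    nu = len(sigma) - 1
    acc = sigma[mu] if (mu <= nu and mu % 2 == 1) else 0  # mu*sigma_mu mod 2
    for k in range(0, min(mu, nu) + 1):                   # sigma_k * S_{mu-k}
        if mu - k >= 1:
            acc ^= gf_mul(sigma[k], S[mu - k])
    return acc

# (15, 5) example values: S1=S2=S4=1, S3=S5=alpha^10, S6=alpha^5
S = [None, 1, 1, EXP[10], 1, EXP[10], EXP[5]]
sigma = [1, 1, 0, EXP[5]]                  # 1 + x + alpha^5 x^3
print(all(newton_lhs(S, sigma, mu) == 0 for mu in range(1, 7)))   # True
```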
If it is possible to determine the elementary symmetric functions
σ1, σ2, · · · , σν from the equations of (11), the error location
numbers β1, β2, · · · , βν can be found by determining the roots of
the error-location polynomial σ(x).
The equations of (11) may have many solutions; however, we want
to find the solution that yields a σ(x) of minimal degree.
This σ(x) will produce an error pattern with a minimum number
of errors. If ν ≤ t, this σ(x) will give the actual error pattern e(x).
Outline of the error-correcting procedure for BCH codes
The procedure consists of three major steps:
1. Compute the syndrome S = (S1, S2, · · · , S2t) from the received
polynomial r(x).
2. Determine the error-location polynomial σ(x) from the syndrome
components S1, S2, · · · , S2t.
3. Determine the error-location numbers β1, β2, · · · , βν by finding the
roots of σ(x), and correct the errors in r(x).
Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
The first step of iteration is to find a minimum-degree polynomial
σ(1)(x) whose coefficients satisfy the first Newton’s identity of
(11).
The next step is to test whether the coefficients of σ(1)(x) also satisfy
the second Newton’s identity of (11).
If the coefficients of σ(1)(x) do satisfy the second Newton’s identity
of (11), we set
σ(2)(x) = σ(1)(x)
If the coefficients of σ(1)(x) do not satisfy the second Newton’s
identity of (11), a correction term is added to σ(1)(x) to form
σ(2)(x) such that σ(2)(x) has minimum degree and its coefficients
satisfy the first two Newton’s identities of (11).
Therefore, at the end of the second step of iteration, we obtain a
minimum-degree polynomial σ(2)(x) whose coefficients satisfy the
first two Newton’s identities of (11).
The third step of iteration is to find a minimum-degree polynomial
σ(3)(x) from σ(2)(x) such that the coefficients of σ(3)(x) satisfy the
first three Newton’s identities of (11).
We test whether the coefficients of σ(2)(x) satisfy the third
Newton’s identity of (11). If they do, we set σ(3)(x) = σ(2)(x).
If they do not, a correction term is added to σ(2)(x) to form
σ(3)(x).
Iteration continues until σ(2t)(x) is obtained.
Then σ(2t)(x) is taken to be the error-location polynomial σ(x),
that is,
σ(x) = σ(2t)(x)
This σ(x) will yield an error pattern e(x) of minimum weight that
satisfies the equations of (7).
If the number of errors in the received polynomial r(x) is t or less,
then σ(x) produces the true error pattern.
Let
σ(µ)(x) = 1 + σ1^(µ)x + σ2^(µ)x^2 + · · · + σlµ^(µ)x^lµ (12)
be the minimum-degree polynomial determined at the µth step of
iteration, whose coefficients satisfy the first µ Newton’s identities of
(11).
To determine σ(µ+1)(x), we compute the following quantity:
dµ = Sµ+1 + σ1^(µ)Sµ + σ2^(µ)Sµ−1 + · · · + σlµ^(µ)Sµ+1−lµ (13)
This quantity dµ is called the µth discrepancy.
If dµ = 0, the coefficients of σ(µ)(x) satisfy the (µ + 1)th Newton’s
identity. We set
σ(µ+1)(x) = σ(µ)(x).
If dµ ≠ 0, the coefficients of σ(µ)(x) do not satisfy the (µ + 1)th
Newton’s identity, and a correction term must be added to σ(µ)(x)
to obtain σ(µ+1)(x).
To accomplish this correction, we go back to the steps prior to the
µth step and determine a polynomial σ(p)(x) such that the pth
discrepancy dp ≠ 0 and p − lp [lp is the degree of σ(p)(x)] has the
largest value.
Then
σ(µ+1)(x) = σ(µ)(x) + dµ dp^(−1) x^(µ−p) σ(p)(x), (14)
which is the minimum-degree polynomial whose coefficients satisfy
the first µ + 1 Newton’s identities.
To carry out the iteration of finding σ(x), we fill out the following
table, where lµ is the degree of σ(µ)(x):

µ     σ(µ)(x)    dµ    lµ    µ − lµ
−1    1          1     0     −1
0     1          S1    0     0
1
2
· · ·
2t
Assuming that we have filled out all rows up to and including the
µth row, we fill out the (µ + 1)th row as follows:
1. If dµ = 0, then σ(µ+1)(x) = σ(µ)(x) and lµ+1 = lµ.
2. If dµ ≠ 0, find another row p prior to the µth row such that dp ≠ 0
and the number p − lp in the last column of the table has the
largest value. Then σ(µ+1)(x) is given by (14) and
lµ+1 = max(lµ, lp + µ − p). (15)
In either case,
dµ+1 = Sµ+2 + σ1^(µ+1)Sµ+1 + · · · + σlµ+1^(µ+1)Sµ+2−lµ+1, (16)
where the σi^(µ+1)’s are the coefficients of σ(µ+1)(x).
The polynomial σ(2t)(x) in the last row should be the required
σ(x).
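The iteration above can be sketched in Python (an illustrative rendering under the assumption of GF(16) generated by 1 + x + x^4, not production decoder code); it reproduces the table of the (15, 5) example worked out later:

```python
# GF(16) tables, generated by 1 + x + x^4
EXP, LOG = [0] * 15, [0] * 16
e = 1
for i in range(15):
    EXP[i], LOG[e] = e, i
    e = (e << 1) ^ (0b10011 if e & 0b1000 else 0)

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp(S, t):
    """S[1..2t] are the syndrome components; returns sigma(x), low degree first."""
    sig = {-1: [1], 0: [1]}            # sigma^(mu)(x) rows of the table
    d = {-1: 1, 0: S[1]}               # discrepancies d_mu
    l = {-1: 0, 0: 0}                  # degrees l_mu
    for mu in range(0, 2 * t):
        if d[mu] == 0:
            sig[mu + 1] = sig[mu]
        else:
            # row p with d_p != 0 and p - l_p largest, as in step 2
            p = max((q for q in range(-1, mu) if d[q] != 0),
                    key=lambda q: q - l[q])
            # correction term d_mu * d_p^(-1) * x^(mu-p) * sigma^(p)(x), eq. (14)
            scale = gf_mul(d[mu], gf_inv(d[p]))
            corr = [0] * (mu - p) + [gf_mul(scale, c) for c in sig[p]]
            new = sig[mu] + [0] * max(0, len(corr) - len(sig[mu]))
            new = [a ^ b for a, b in zip(new, corr + [0] * (len(new) - len(corr)))]
            while len(new) > 1 and new[-1] == 0:
                new.pop()              # trim so that l_{mu+1} is the true degree
            sig[mu + 1] = new
        l[mu + 1] = len(sig[mu + 1]) - 1
        if mu + 1 < 2 * t:             # next discrepancy, eq. (16)
            d[mu + 1] = 0
            for k, c in enumerate(sig[mu + 1]):
                if mu + 2 - k >= 1:
                    d[mu + 1] ^= gf_mul(c, S[mu + 2 - k])
    return sig[2 * t]

# (15, 5) example: S1=S2=S4=1, S3=S5=alpha^10, S6=alpha^5
S = [None, 1, 1, EXP[10], 1, EXP[10], EXP[5]]
print(berlekamp(S, 3))    # [1, 1, 0, 6] = 1 + x + alpha^5 x^3
```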
Example
Consider the (15, 5) triple-error correcting BCH code. Assume
that the code vector of all zeros,
v = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
is transmitted and the received vector is
r = (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0)
Then r(x) = x^3 + x^5 + x^12.
The minimal polynomials for α, α^2, and α^4 are identical:
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The elements α^3 and α^6 have the same minimal polynomial,
φ3(x) = φ6(x) = 1 + x + x^2 + x^3 + x^4.
The minimal polynomial for α^5 is
φ5(x) = 1 + x + x^2.
Dividing r(x) by φ1(x), φ3(x), and φ5(x), respectively, we obtain
the following remainders:
b1(x) = 1,
b3(x) = 1 + x^2 + x^3,
b5(x) = x^2.
Substituting α, α^2, and α^4 into b1(x), we obtain the following
syndrome components:
S1 = S2 = S4 = 1.
Substituting α^3 and α^6 into b3(x), we obtain
S3 = 1 + α^6 + α^9 = α^10,
S6 = 1 + α^12 + α^18 = α^5.
Substituting α^5 into b5(x), we have
S5 = α^10.
Using the iterative procedure, we obtain the table below. Thus, the
error-location polynomial is
σ(x) = σ(6)(x) = 1 + x + α^5x^3.

µ     σ(µ)(x)            dµ      lµ    µ − lµ
−1    1                  1       0     −1
0     1                  1       0     0
1     1 + x              0       1     0     (take p = −1)
2     1 + x              α^5     1     1
3     1 + x + α^5x^2     0       2     1     (take p = 0)
4     1 + x + α^5x^2     α^10    2     2
5     1 + x + α^5x^3     0       3     2     (take p = 2)
6     1 + x + α^5x^3     —       —     —
We can easily check that α^3, α^10, and α^12 are the roots of σ(x).
Their inverses are α^12, α^5, and α^3, which are the error-location
numbers. Therefore, the error pattern is
e(x) = x^3 + x^5 + x^12.
Adding e(x) to the received polynomial r(x), we obtain the
all-zero code vector.
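Trying every nonzero field element as a candidate root of σ(x) (a Chien-style search) mechanizes this last step; a sketch for the example, again assuming GF(16) generated by 1 + x + x^4, with illustrative helper names:

```python
# GF(16) tables, generated by 1 + x + x^4
EXP, LOG = [0] * 15, [0] * 16
e = 1
for i in range(15):
    EXP[i], LOG[e] = e, i
    e = (e << 1) ^ (0b10011 if e & 0b1000 else 0)

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def error_locations(sigma):
    """Each root alpha^i of sigma(x) marks error position (15 - i) mod 15,
    since the error-location numbers are the inverses of the roots."""
    locs = []
    for i in range(15):
        val = 0
        for deg, c in enumerate(sigma):
            val ^= gf_mul(c, EXP[(deg * i) % 15])
        if val == 0:
            locs.append((15 - i) % 15)
    return sorted(locs)

sigma = [1, 1, 0, EXP[5]]                   # 1 + x + alpha^5 x^3
locs = error_locations(sigma)               # [3, 5, 12]
r = [0,0,0,1,0,1,0,0,0,0,0,0,1,0,0]         # received vector of the example
for j in locs:
    r[j] ^= 1                               # flip the erroneous bits
print(locs, r)                              # r is now the all-zero code vector
```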
Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
If the number of errors in the received polynomial r(x) is less than
the designed error correcting capability t of the code, it is not
necessary to carry out the 2t steps of iteration to find the
error-location polynomial σ(x).
Let σ(µ)(x) and dµ be the solution and discrepancy obtained at the
µth step of iteration.
Let lµ be the degree of σ(µ)(x). Now, if dµ and the discrepancies at
the next t − lµ − 1 steps are all zero, σ(µ)(x) is the error location
polynomial.
If the number of errors in the received polynomial r(x) is ν (ν ≤ t),
only t + ν steps of iteration are needed to determine the
error-location polynomial σ(x).
If ν is small, the reduction in the number of iteration steps
results in an increase in decoding speed.
The iterative algorithm for finding σ(x) applies not only to
binary BCH codes but also to nonbinary BCH codes.
Height and depth gauge linear metrology.pdf
 
Object Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOADObject Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOAD
 

Decoding BCH-Code.pdf

  • 1. Decoding of the BCH Codes. Raju Hazari, Department of Computer Science and Engineering, National Institute of Technology Calicut. Raju Hazari (NIT, Calicut), Coding Theory, Winter 2023, March 30, 2023, 1 / 26
  • 2. Syndrome Calculation. Suppose that a code word v(x) = v_0 + v_1 x + v_2 x^2 + ... + v_{n-1} x^{n-1} is transmitted and the transmission errors result in the following received vector: r(x) = r_0 + r_1 x + r_2 x^2 + ... + r_{n-1} x^{n-1}. Let e(x) be the error pattern. Then r(x) = v(x) + e(x). (1) The first step in decoding is to compute the syndrome from the received vector r(x). For decoding a t-error-correcting primitive BCH code, the syndrome is a 2t-tuple, S = (S_1, S_2, ..., S_{2t}) = r · H^T. (2)
  • 3. Syndrome Calculation. The i-th component of the syndrome is S_i = r(α^i) = r_0 + r_1 α^i + r_2 α^{2i} + ... + r_{n-1} α^{(n-1)i} (3) for 1 ≤ i ≤ 2t. Note that the syndrome components are elements of the field GF(2^m). These components can be computed from r(x) as follows. Dividing r(x) by the minimal polynomial φ_i(x) of α^i, we obtain r(x) = a_i(x) φ_i(x) + b_i(x), where b_i(x) is the remainder, with degree less than that of φ_i(x). Since φ_i(α^i) = 0, we have S_i = r(α^i) = b_i(α^i). (4) Thus, the syndrome component S_i is obtained by evaluating b_i(x) at x = α^i.
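The division behind (4) is ordinary polynomial division over GF(2). A minimal sketch (the function name and the integer-bitmask representation are my own, not from the slides), where bit j of the integer holds the coefficient of x^j:

```python
def gf2_poly_rem(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division.

    Polynomials are integer bitmasks (bit j = coefficient of x^j);
    coefficient addition over GF(2) is XOR.
    """
    d = divisor.bit_length()
    while dividend.bit_length() >= d:
        # cancel the leading term of the dividend
        dividend ^= divisor << (dividend.bit_length() - d)
    return dividend

# r(x) = 1 + x^8 divided by phi_1(x) = 1 + x + x^4 leaves b_1(x) = x^2
print(bin(gf2_poly_rem(0b100000001, 0b10011)))  # 0b100
```

The same call with divisor 0b11111 (that is, 1 + x + x^2 + x^3 + x^4) yields the remainder 1 + x^3 used in the next slide's example.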
  • 4. Syndrome Calculation (Example). Consider the double-error-correcting (15, 7) BCH code. Suppose that the vector r = (1 0 0 0 0 0 0 0 1 0 0 0 0 0 0) is received. The corresponding polynomial is r(x) = 1 + x^8. The syndrome consists of four components, S = (S_1, S_2, S_3, S_4). The minimal polynomials of α, α^2, and α^4 are identical: φ_1(x) = φ_2(x) = φ_4(x) = 1 + x + x^4. The minimal polynomial of α^3 is φ_3(x) = 1 + x + x^2 + x^3 + x^4.
  • 5. Syndrome Calculation (Example). Dividing r(x) = 1 + x^8 by φ_1(x) = 1 + x + x^4, the remainder is b_1(x) = x^2. Dividing r(x) = 1 + x^8 by φ_3(x) = 1 + x + x^2 + x^3 + x^4, the remainder is b_3(x) = 1 + x^3. Substituting α, α^2, and α^4 into b_1(x), we obtain S_1 = α^2, S_2 = α^4, S_4 = α^8. Substituting α^3 into b_3(x), we obtain S_3 = 1 + α^9 = 1 + α + α^3 = α^7. Thus, S = (α^2, α^4, α^7, α^8).
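This syndrome can be checked numerically. The sketch below (helper names are mine) builds GF(2^4) from the primitive polynomial x^4 + x + 1, stores each field element as a 4-bit mask, and evaluates r(x) = 1 + x^8 at α, α^2, α^3, α^4 directly via (3):

```python
# Antilog table for GF(2^4), primitive polynomial x^4 + x + 1
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
LOG = {e: i for i, e in enumerate(EXP)}

def syndrome(one_positions, i):
    """S_i = r(alpha^i) for a binary r(x) with 1-coefficients at the given exponents."""
    s = 0
    for j in one_positions:
        s ^= EXP[(i * j) % 15]   # alpha^{ij}; addition in GF(2^4) is XOR
    return s

S = [syndrome([0, 8], i) for i in range(1, 5)]
print([LOG[s] for s in S])  # [2, 4, 7, 8], i.e. S = (a^2, a^4, a^7, a^8)
```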
  • 6. Decoding Algorithm for the BCH Codes. Since α, α^2, ..., α^{2t} are roots of each code polynomial, v(α^i) = 0 for 1 ≤ i ≤ 2t. From (1) and (3), we obtain the following relationship between the syndrome components and the error pattern: S_i = e(α^i) (5) for 1 ≤ i ≤ 2t. From (5) we see that the syndrome S depends on the error pattern e only. Suppose that the error pattern e(x) has ν errors at locations x^{j_1}, x^{j_2}, ..., x^{j_ν}, that is, e(x) = x^{j_1} + x^{j_2} + ... + x^{j_ν}, (6) where 0 ≤ j_1 < j_2 < ... < j_ν < n.
  • 7. Decoding Algorithm for the BCH Codes. From (5) and (6), we obtain the following set of equations: S_1 = α^{j_1} + α^{j_2} + ... + α^{j_ν}, S_2 = (α^{j_1})^2 + (α^{j_2})^2 + ... + (α^{j_ν})^2, S_3 = (α^{j_1})^3 + (α^{j_2})^3 + ... + (α^{j_ν})^3, ..., S_{2t} = (α^{j_1})^{2t} + (α^{j_2})^{2t} + ... + (α^{j_ν})^{2t}, (7) where α^{j_1}, α^{j_2}, ..., α^{j_ν} are unknown. Any method of solving these equations is a decoding algorithm for the BCH codes.
  • 8. Decoding Algorithm for the BCH Codes. Once α^{j_1}, α^{j_2}, ..., α^{j_ν} have been found, the powers j_1, j_2, ..., j_ν tell us the error locations in e(x). In general, the equations of (7) have many possible solutions (2^k of them). Each solution yields a different error pattern. If the number of errors in the actual error pattern e(x) is t or less, the solution that yields an error pattern with the smallest number of errors is the right one. That is, the error pattern corresponding to this solution is the most probable error pattern e(x) caused by the channel noise. For large t, solving the equations of (7) directly is difficult and inefficient.
  • 9. Decoding Algorithm for the BCH Codes. The following is an effective procedure for determining α^{j_l} for l = 1, 2, ..., ν from the syndrome components S_i. Let β_l = α^{j_l} for 1 ≤ l ≤ ν. We call these elements the error-location numbers, since they tell us the locations of the errors. Now the equations of (7) can be expressed in the following form: S_1 = β_1 + β_2 + ... + β_ν, S_2 = β_1^2 + β_2^2 + ... + β_ν^2, ..., S_{2t} = β_1^{2t} + β_2^{2t} + ... + β_ν^{2t}. (8) These 2t equations are symmetric functions in β_1, β_2, ..., β_ν, known as power-sum symmetric functions.
  • 10. Decoding Algorithm for the BCH Codes. Now we define the following polynomial: σ(x) = (1 + β_1 x)(1 + β_2 x) ... (1 + β_ν x) = σ_0 + σ_1 x + σ_2 x^2 + ... + σ_ν x^ν. (9) The roots of σ(x) are β_1^{-1}, β_2^{-1}, ..., β_ν^{-1}, the inverses of the error-location numbers. For this reason, σ(x) is called the error-location polynomial. The coefficients of σ(x) and the error-location numbers are related by the following equations: σ_0 = 1, σ_1 = β_1 + β_2 + ... + β_ν, σ_2 = β_1 β_2 + β_1 β_3 + ... + β_{ν-1} β_ν (the sum of all products of two distinct β's), ..., σ_ν = β_1 β_2 ... β_ν. (10)
  • 11. Decoding Algorithm for the BCH Codes. The σ_i are known as elementary symmetric functions of the β_l. From (8) and (10), we see that the σ_i are related to the syndrome components S_j by the following Newton's identities: S_1 + σ_1 = 0, S_2 + σ_1 S_1 + 2σ_2 = 0, S_3 + σ_1 S_2 + σ_2 S_1 + 3σ_3 = 0, ..., S_ν + σ_1 S_{ν-1} + ... + σ_{ν-1} S_1 + ν σ_ν = 0, S_{ν+1} + σ_1 S_ν + ... + σ_{ν-1} S_2 + σ_ν S_1 = 0, ... (11)
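In a field of characteristic 2, the term µσ_µ in (11) equals σ_µ for odd µ and vanishes for even µ, so the identities are easy to check numerically. A sketch (helper names are mine; GF(2^4) built from x^4 + x + 1) verifying the first six identities for σ(x) = 1 + x + α^5 x^3 with the syndromes (1, 1, α^10, 1, α^10, α^5) from the (15, 5) example on slide 22:

```python
# Antilog table for GF(2^4), primitive polynomial x^4 + x + 1
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
LOG = {e: i for i, e in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def newton_residual(sigma, S, mu):
    """Left-hand side of the mu-th identity in (11); zero when satisfied.

    In characteristic 2, mu*sigma_mu reduces to sigma_mu for odd mu
    and to 0 for even mu.
    """
    v = S[mu - 1]                                 # S_mu
    for i in range(1, len(sigma)):
        if i < mu:
            v ^= gf_mul(sigma[i], S[mu - 1 - i])  # sigma_i * S_{mu-i}
        elif i == mu and mu % 2 == 1:
            v ^= sigma[i]                         # mu * sigma_mu, mu odd
    return v

S = [1, 1, EXP[10], 1, EXP[10], EXP[5]]  # S_1 .. S_6
sigma = [1, 1, 0, EXP[5]]                # 1 + x + a^5 x^3
print([newton_residual(sigma, S, mu) for mu in range(1, 7)])  # six zeros
```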
  • 12. Decoding Algorithm for the BCH Codes. If it is possible to determine the elementary symmetric functions σ_1, σ_2, ..., σ_ν from the equations of (11), the error-location numbers β_1, β_2, ..., β_ν can be found by determining the roots of the error-location polynomial σ(x). The equations of (11) may have many solutions; however, we want the solution that yields a σ(x) of minimal degree. This σ(x) produces an error pattern with a minimum number of errors. If ν ≤ t, this σ(x) gives the actual error pattern e(x).
  • 13. Outline of the error-correcting procedure for BCH codes. The procedure consists of three major steps: (i) Compute the syndrome S = (S_1, S_2, ..., S_{2t}) from the received polynomial r(x). (ii) Determine the error-location polynomial σ(x) from the syndrome components S_1, S_2, ..., S_{2t}. (iii) Determine the error-location numbers β_1, β_2, ..., β_ν by finding the roots of σ(x), and correct the errors in r(x).
  • 14. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. The first step of the iteration is to find a minimum-degree polynomial σ^(1)(x) whose coefficients satisfy the first Newton's identity of (11). The next step is to test whether the coefficients of σ^(1)(x) also satisfy the second Newton's identity of (11). If they do, we set σ^(2)(x) = σ^(1)(x). If they do not, a correction term is added to σ^(1)(x) to form σ^(2)(x) such that σ^(2)(x) has minimum degree and its coefficients satisfy the first two Newton's identities of (11).
  • 15. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. Therefore, at the end of the second step of the iteration, we obtain a minimum-degree polynomial σ^(2)(x) whose coefficients satisfy the first two Newton's identities of (11). The third step is to find a minimum-degree polynomial σ^(3)(x) from σ^(2)(x) such that the coefficients of σ^(3)(x) satisfy the first three Newton's identities of (11). We test whether the coefficients of σ^(2)(x) satisfy the third Newton's identity of (11). If they do, we set σ^(3)(x) = σ^(2)(x). If they do not, a correction term is added to σ^(2)(x) to form σ^(3)(x). The iteration continues until σ^(2t)(x) is obtained.
  • 16. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. Then σ^(2t)(x) is taken to be the error-location polynomial σ(x), that is, σ(x) = σ^(2t)(x). This σ(x) yields an error pattern e(x) of minimum weight that satisfies the equations of (7). If the number of errors in the received polynomial r(x) is t or less, then σ(x) produces the true error pattern. Let σ^(µ)(x) = 1 + σ_1^(µ) x + σ_2^(µ) x^2 + ... + σ_{l_µ}^(µ) x^{l_µ} (12) be the minimum-degree polynomial determined at the µ-th step of the iteration whose coefficients satisfy the first µ Newton's identities of (11).
  • 17. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. To determine σ^(µ+1)(x), we compute the following quantity: d_µ = S_{µ+1} + σ_1^(µ) S_µ + σ_2^(µ) S_{µ-1} + ... + σ_{l_µ}^(µ) S_{µ+1-l_µ}. (13) This quantity d_µ is called the µ-th discrepancy. If d_µ = 0, the coefficients of σ^(µ)(x) satisfy the (µ+1)-th Newton's identity, and we set σ^(µ+1)(x) = σ^(µ)(x). If d_µ ≠ 0, the coefficients of σ^(µ)(x) do not satisfy the (µ+1)-th Newton's identity, and a correction term must be added to σ^(µ)(x) to obtain σ^(µ+1)(x).
  • 18. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. To make this correction, we go back to the steps prior to the µ-th step and determine a polynomial σ^(p)(x) such that the p-th discrepancy d_p ≠ 0 and p − l_p [where l_p is the degree of σ^(p)(x)] has the largest value. Then σ^(µ+1)(x) = σ^(µ)(x) + d_µ d_p^{-1} x^{µ−p} σ^(p)(x), (14) which is the minimum-degree polynomial whose coefficients satisfy the first µ + 1 Newton's identities.
  • 19. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. To carry out the iteration of finding σ(x), we fill out the following table, where l_µ is the degree of σ^(µ)(x).
      µ  | σ^(µ)(x) | d_µ | l_µ | µ − l_µ
     -1  | 1        | 1   | 0   | -1
      0  | 1        | S_1 | 0   | 0
      1  |          |     |     |
      2  |          |     |     |
     ... |          |     |     |
     2t  |          |     |     |
  • 20. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. Assuming that we have filled out all rows up to and including the µ-th row, we fill out the (µ+1)-th row as follows: (1) If d_µ = 0, then σ^(µ+1)(x) = σ^(µ)(x) and l_{µ+1} = l_µ. (2) If d_µ ≠ 0, find another row p prior to the µ-th row such that d_p ≠ 0 and the number p − l_p in the last column of the table has the largest value. Then σ^(µ+1)(x) is given by (14) and l_{µ+1} = max(l_µ, l_p + µ − p). (15) In either case, d_{µ+1} = S_{µ+2} + σ_1^(µ+1) S_{µ+1} + ... + σ_{l_{µ+1}}^(µ+1) S_{µ+2−l_{µ+1}}, (16) where the σ_l^(µ+1) are the coefficients of σ^(µ+1)(x). The polynomial σ^(2t)(x) in the last row is the required σ(x).
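The table-filling rules above are equivalent to the Berlekamp-Massey recurrence. A compact sketch over GF(2^4) (function and variable names are mine: C plays the role of σ^(µ)(x), B of the saved σ^(p)(x), b of d_p, and m of the gap µ − p):

```python
# GF(2^4) arithmetic, primitive polynomial x^4 + x + 1
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
LOG = {e: i for i, e in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp_massey(S):
    """Error-location polynomial from syndromes S = [S1, ..., S2t].

    Returns the coefficient list [1, sigma_1, ..., sigma_l] of sigma(x).
    """
    C, B = [1], [1]    # current sigma^{(mu)} and the saved sigma^{(p)}
    L, m, b = 0, 1, 1  # degree l_mu, step gap mu - p, discrepancy d_p
    for n, Sn in enumerate(S):
        # discrepancy d_mu as in (13)
        d = Sn
        for i in range(1, len(C)):
            if i <= n:
                d ^= gf_mul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        # correction step (14): C(x) += (d / d_p) * x^m * B(x)
        coef, T = gf_mul(d, gf_inv(b)), C[:]
        C = C + [0] * max(0, m + len(B) - len(C))
        for i, c in enumerate(B):
            C[m + i] ^= gf_mul(coef, c)
        if 2 * L <= n:  # degree must grow: save the old polynomial
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C

# Syndromes of the (15, 5) example: S = (1, 1, a^10, 1, a^10, a^5)
sigma = berlekamp_massey([1, 1, EXP[10], 1, EXP[10], EXP[5]])
print(sigma)  # [1, 1, 0, 6], i.e. 1 + x + a^5 x^3
```

Running this on the example's syndromes reproduces the σ(x) found in the worked table on the next slides.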
  • 21. Example. Consider the (15, 5) triple-error-correcting BCH code. Assume that the all-zero code vector v = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) is transmitted and the received vector is r = (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0). Then r(x) = x^3 + x^5 + x^12. The minimal polynomials of α, α^2, and α^4 are identical: φ_1(x) = φ_2(x) = φ_4(x) = 1 + x + x^4. The elements α^3 and α^6 have the same minimal polynomial, φ_3(x) = φ_6(x) = 1 + x + x^2 + x^3 + x^4. The minimal polynomial of α^5 is φ_5(x) = 1 + x + x^2.
  • 22. Example. Dividing r(x) by φ_1(x), φ_3(x), and φ_5(x), respectively, we obtain the following remainders: b_1(x) = 1, b_3(x) = 1 + x^2 + x^3, b_5(x) = x^2. Substituting α, α^2, and α^4 into b_1(x), we obtain the syndrome components S_1 = S_2 = S_4 = 1. Substituting α^3 and α^6 into b_3(x), we obtain S_3 = 1 + α^6 + α^9 = α^10 and S_6 = 1 + α^12 + α^18 = α^5. Substituting α^5 into b_5(x), we have S_5 = α^10.
  • 23. Example. Using the iterative procedure, we obtain the table below. Thus, the error-location polynomial is σ(x) = σ^(6)(x) = 1 + x + α^5 x^3.
      µ  | σ^(µ)(x)         | d_µ  | l_µ | µ − l_µ
     -1  | 1                | 1    | 0   | -1
      0  | 1                | 1    | 0   | 0
      1  | 1 + x            | 0    | 1   | 0  (take p = -1)
      2  | 1 + x            | α^5  | 1   | 1
      3  | 1 + x + α^5 x^2  | 0    | 2   | 1  (take p = 0)
      4  | 1 + x + α^5 x^2  | α^10 | 2   | 2
      5  | 1 + x + α^5 x^3  | 0    | 3   | 2  (take p = 2)
      6  | 1 + x + α^5 x^3  | -    | -   | -
  • 24. Example. We can easily check that α^3, α^10, and α^12 are the roots of σ(x). Their inverses are α^12, α^5, and α^3, which are the error-location numbers. Therefore, the error pattern is e(x) = x^3 + x^5 + x^12. Adding e(x) to the received polynomial r(x), we obtain the all-zero code vector.
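The root-finding step can be sketched as a Chien-style search: evaluate σ at every α^{-i} and record the positions i where it vanishes; those are the error locations to flip in r (again GF(2^4) with x^4 + x + 1; helper names are my own):

```python
# GF(2^4) arithmetic, primitive polynomial x^4 + x + 1
EXP, x = [], 1
for _ in range(15):
    EXP.append(x)
    x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
LOG = {e: i for i, e in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def error_locations(sigma):
    """Positions i with sigma(alpha^{-i}) = 0, i.e. the error locations."""
    locs = []
    for i in range(15):
        v = 0
        for j, c in enumerate(sigma):
            v ^= gf_mul(c, EXP[(j * (15 - i)) % 15])  # c * (alpha^{-i})^j
        if v == 0:
            locs.append(i)
    return locs

# sigma(x) = 1 + x + a^5 x^3 from the worked example
locs = error_locations([1, 1, 0, EXP[5]])
print(sorted(locs))  # [3, 5, 12]
```

XORing 1 into the received vector at positions 3, 5, and 12 then recovers the all-zero code vector, as the slide states.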
  • 25. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. If the number of errors in the received polynomial r(x) is less than the designed error-correcting capability t of the code, it is not necessary to carry out all 2t steps of the iteration to find the error-location polynomial σ(x). Let σ^(µ)(x) and d_µ be the solution and discrepancy obtained at the µ-th step of the iteration, and let l_µ be the degree of σ^(µ)(x). If d_µ and the discrepancies at the next t − l_µ − 1 steps are all zero, σ^(µ)(x) is the error-location polynomial.
  • 26. Iterative Algorithm for Finding the Error-location Polynomial σ(x) [Berlekamp's iterative algorithm]. If the number of errors in the received polynomial r(x) is ν (ν ≤ t), only t + ν steps of the iteration are needed to determine the error-location polynomial σ(x). If ν is small, the reduction in the number of iteration steps results in an increase in decoding speed. The iterative algorithm for finding σ(x) applies not only to binary BCH codes but also to nonbinary BCH codes.