1. Decoding of the BCH Codes
Raju Hazari
Department of Computer Science and Engineering
National Institute of Technology Calicut
March 30, 2023
Raju Hazari (NIT, Calicut) Coding Theory, Winter 2023 March 30, 2023 1 / 26
2. Syndrome Calculation
Suppose that a code word v(x) = v0 + v1x + v2x^2 + · · · + vn−1x^{n−1}
is transmitted and the transmission errors result in the following
received vector:
r(x) = r0 + r1x + r2x^2 + · · · + rn−1x^{n−1}.
Let e(x) be the error pattern. Then
r(x) = v(x) + e(x). (1)
The first step of decoding a code is to compute the syndrome from
the received vector r(x).
For decoding a t-error correcting primitive BCH code, the
syndrome is a 2t-tuple,
S = (S1, S2, · · · , S2t) = r · H^T, (2)
3. Syndrome Calculation
We find that the ith component of the syndrome is
Si = r(α^i) = r0 + r1α^i + r2α^{2i} + · · · + rn−1α^{(n−1)i} (3)
for 1 ≤ i ≤ 2t.
Note that the syndrome components are elements in the field GF(2^m).
These components can be computed from r(x) as follows.
Dividing r(x) by the minimal polynomial φi(x) of α^i, we obtain
r(x) = ai(x)φi(x) + bi(x),
where bi(x) is the remainder with degree less than that of φi(x).
Since φi(α^i) = 0, we have
Si = r(α^i) = bi(α^i). (4)
Thus, the syndrome component Si is obtained by evaluating bi(x) with
x = α^i.
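As an illustrative sketch (Python, not part of the original slides), a polynomial over GF(2) can be stored as a bitmask with bit d holding the coefficient of x^d; the remainder bi(x) is then a short shift-and-XOR division loop:

```python
def gf2_mod(r, m):
    """Remainder of r(x) divided by m(x); GF(2) polynomials as bitmasks,
    bit d holding the coefficient of x^d."""
    deg_m = m.bit_length() - 1
    while r.bit_length() - 1 >= deg_m:
        # cancel the leading term of r(x) with an aligned copy of m(x)
        r ^= m << (r.bit_length() - 1 - deg_m)
    return r

# r(x) = 1 + x^8 reduced modulo phi_1(x) = 1 + x + x^4
print(bin(gf2_mod(0b100000001, 0b10011)))   # 0b100, i.e. b1(x) = x^2
```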
4. Syndrome Calculation (Example)
Consider the double-error correcting (15, 7) BCH code. Suppose
that the vector
r = (1 0 0 0 0 0 0 0 1 0 0 0 0 0 0)
is received.
The corresponding polynomial is r(x) = 1 + x^8.
The syndrome consists of four components,
S = (S1, S2, S3, S4)
The minimal polynomials of α, α^2, and α^4 are identical:
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The minimal polynomial of α^3 is
φ3(x) = 1 + x + x^2 + x^3 + x^4.
5. Syndrome Calculation (Example)
Dividing r(x) = 1 + x^8 by φ1(x) = 1 + x + x^4, the remainder is
b1(x) = x^2.
Dividing r(x) = 1 + x^8 by φ3(x) = 1 + x + x^2 + x^3 + x^4, the
remainder is
b3(x) = 1 + x^3.
Substituting α, α^2, and α^4 into b1(x), we obtain
S1 = α^2, S2 = α^4, S4 = α^8.
Substituting α^3 into b3(x), we obtain
S3 = 1 + α^9 = 1 + α + α^3 = α^7.
Thus,
S = (α^2, α^4, α^7, α^8)
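This evaluation step can be checked with exp/log tables for GF(2^4) built from the primitive polynomial 1 + x + x^4 (so α^4 = α + 1). The sketch below (not part of the original slides) hard-codes the remainders b1(x) = x^2 and b3(x) = 1 + x^3 from this slide and evaluates them at the required powers of α:

```python
# exp/log tables for GF(2^4) with primitive polynomial 1 + x + x^4,
# so alpha^4 = alpha + 1; field elements are 4-bit masks.
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a     # duplicated so exponent sums need no reduction
    LOG[a] = i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011

def eval_at_alpha(poly, i):
    """Evaluate a GF(2) polynomial (bitmask) at alpha^i in GF(2^4)."""
    s = 0
    for d in range(poly.bit_length()):
        if (poly >> d) & 1:
            s ^= EXP[(d * i) % 15]
    return s

b1, b3 = 0b100, 0b1001           # b1(x) = x^2, b3(x) = 1 + x^3
S = [eval_at_alpha(b1, 1), eval_at_alpha(b1, 2),
     eval_at_alpha(b3, 3), eval_at_alpha(b1, 4)]
print([LOG[s] for s in S])       # exponents of (S1, S2, S3, S4): [2, 4, 7, 8]
```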
6. Decoding Algorithm for the BCH Codes
Since α, α^2, · · · , α^{2t} are roots of each code polynomial, v(α^i) = 0
for 1 ≤ i ≤ 2t.
From (1) and (3), we obtain the following relationship between the
syndrome components and the error pattern :
Si = e(α^i) (5)
for 1 ≤ i ≤ 2t.
From (5) we see that the syndrome S depends on the error pattern
e only.
Suppose that the error pattern e(x) has ν errors at locations
x^{j1}, x^{j2}, · · · , x^{jν}; that is,
e(x) = x^{j1} + x^{j2} + · · · + x^{jν}, (6)
where 0 ≤ j1 < j2 < · · · < jν < n.
7. Decoding Algorithm for the BCH Codes
From (5) and (6), we obtain the following set of equations :
S1 = α^{j1} + α^{j2} + · · · + α^{jν}
S2 = (α^{j1})^2 + (α^{j2})^2 + · · · + (α^{jν})^2
S3 = (α^{j1})^3 + (α^{j2})^3 + · · · + (α^{jν})^3
⋮ (7)
S2t = (α^{j1})^{2t} + (α^{j2})^{2t} + · · · + (α^{jν})^{2t},
where α^{j1}, α^{j2}, · · · , α^{jν} are unknown.
Any method for solving these equations is a decoding algorithm for
the BCH codes.
8. Decoding Algorithm for the BCH Codes
Once α^{j1}, α^{j2}, · · · , α^{jν} have been found, the powers j1, j2, · · · , jν tell
us the error locations in e(x).
In general, the equations of (7) have many possible solutions (2^k of
them).
Each solution yields a different error pattern.
If the number of errors in the actual error pattern e(x) is t or less,
the solution that yields an error pattern with the smallest number
of errors is the right solution.
That is, the error pattern corresponding to this solution is the
most probable error pattern e(x) caused by the channel noise.
For large t, solving the equations of (7) directly is difficult and
ineffective.
9. Decoding Algorithm for the BCH Codes
Following is an effective procedure to determine α^{jl} for
l = 1, 2, · · · , ν from the syndrome components Si's.
Let βl = α^{jl} for 1 ≤ l ≤ ν. We call these elements the error-location
numbers, since they tell us the locations of the errors.
Now the equations of (7) can be expressed in the following form :
S1 = β1 + β2 + · · · + βν
S2 = β1^2 + β2^2 + · · · + βν^2
⋮ (8)
S2t = β1^{2t} + β2^{2t} + · · · + βν^{2t}
These 2t equations are symmetric functions in β1, β2, · · · , βν,
which are known as power-sum symmetric functions.
10. Decoding Algorithm for the BCH Codes
Now, we define the following polynomial :
σ(x) = (1 + β1x)(1 + β2x) · · · (1 + βνx)
     = σ0 + σ1x + σ2x^2 + · · · + σνx^ν (9)
The roots of σ(x) are β1^{−1}, β2^{−1}, · · · , βν^{−1}, which are the inverses of
the error-location numbers. For this reason, σ(x) is called the
error-location polynomial.
The coefficients of σ(x) and error-location numbers are related by
the following equations :
σ0 = 1
σ1 = β1 + β2 + · · · + βν
σ2 = β1β2 + β2β3 + · · · + βν−1βν
⋮ (10)
σν = β1β2 · · · βν.
11. Decoding Algorithm for the BCH Codes
The σi’s are known as elementary symmetric functions of βl’s.
From (8) and (10), we see that the σi’s are related to the
syndrome components Sj’s.
They are related to the syndrome components by the following
Newton’s identities :
S1 + σ1 = 0
S2 + σ1S1 + 2σ2 = 0
S3 + σ1S2 + σ2S1 + 3σ3 = 0
⋮ (11)
Sν + σ1Sν−1 + · · · + σν−1S1 + νσν = 0
Sν+1 + σ1Sν + · · · + σν−1S2 + σνS1 = 0
⋮
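As a numerical sanity check (a sketch, not part of the original slides), take three hypothetical error-location numbers β = α^3, α^5, α^12 in GF(2^4) (built on 1 + x + x^4), expand σ(x), and verify the first three identities of (11); in a field of characteristic 2, 2σ2 = 0 and 3σ3 = σ3:

```python
# exp/log tables for GF(2^4), primitive polynomial 1 + x + x^4
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a
    LOG[a] = i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011

def mul(x, y):
    return 0 if x == 0 or y == 0 else EXP[LOG[x] + LOG[y]]

betas = [EXP[3], EXP[5], EXP[12]]        # hypothetical error locators

# expand sigma(x) = (1 + beta1 x)(1 + beta2 x)(1 + beta3 x)
sigma = [1]
for b in betas:
    prod = sigma + [0]
    for deg, c in enumerate(sigma):      # multiply by (1 + b*x)
        prod[deg + 1] ^= mul(c, b)
    sigma = prod

# power-sum symmetric functions S_i = beta1^i + beta2^i + beta3^i
S = [0, 0, 0, 0]
for i in range(1, 4):
    for b in betas:
        S[i] ^= EXP[(LOG[b] * i) % 15]

# Newton's identities (11) over a characteristic-2 field
assert S[1] ^ sigma[1] == 0                                   # S1 + sigma1
assert S[2] ^ mul(sigma[1], S[1]) == 0                        # S2 + sigma1*S1 + 2*sigma2
assert S[3] ^ mul(sigma[1], S[2]) ^ mul(sigma[2], S[1]) ^ sigma[3] == 0
print(sigma)                             # [1, 1, 0, 6] = 1 + x + alpha^5 x^3
```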
12. Decoding Algorithm for the BCH Codes
If it is possible to determine the elementary symmetric functions
σ1, σ2, · · · , σν from the equations of (11), the error location
numbers β1, β2, · · · , βν can be found by determining the roots of
the error-location polynomial σ(x).
The equations of (11) may have many solutions; however, we want
to find the solution that yields a σ(x) of minimal degree.
This σ(x) will produce an error pattern with a minimum number
of errors. If ν ≤ t, this σ(x) will give the actual error pattern e(x).
13. Outline of the error-correcting procedure for BCH codes
The procedure consists of three major steps :
▶ Compute the syndrome S = (S1, S2, · · · , S2t) from the received
polynomial r(x).
▶ Determine the error-location polynomial σ(x) from the syndrome
components S1, S2, · · · , S2t.
▶ Determine the error-location numbers β1, β2, · · · , βν by finding the
roots of σ(x), and correct the errors in r(x).
14. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
The first step of iteration is to find a minimum-degree polynomial
σ(1)(x) whose coefficients satisfy the first Newton’s identity of
(11).
The next step is to test whether coefficients of σ(1)(x) also satisfy
the second Newton’s identity of (11).
If the coefficients of σ(1)(x) do satisfy the second Newton’s identity
of (11), we set
σ(2)(x) = σ(1)(x)
If the coefficients of σ(1)(x) do not satisfy the second Newton’s
identity of (11), a correction term is added to σ(1)(x) to form
σ(2)(x) such that σ(2)(x) has minimum degree and its coefficients
satisfy the first two Newton’s identities of (11).
15. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
Therefore, at the end of the second step of iteration, we obtain a
minimum-degree polynomial σ(2)(x) whose coefficients satisfy the
first two Newton’s identities of (11).
The third step of iteration is to find a minimum-degree polynomial
σ(3)(x) from σ(2)(x) such that the coefficients of σ(3)(x) satisfy the
first three Newton’s identities of (11).
We test whether the coefficients of σ(2)(x) satisfy the third
Newton’s identity of (11). If they do, we set σ(3)(x) = σ(2)(x).
If they do not, a correction term is added to σ(2)(x) to form
σ(3)(x).
Iteration continues until σ(2t)(x) is obtained.
16. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
Then σ(2t)(x) is taken to be the error-location polynomial σ(x),
that is,
σ(x) = σ(2t)(x)
This σ(x) will yield an error pattern e(x) of minimum weight that
satisfies the equations of (7).
If the number of errors in the received polynomial r(x) is t or less,
then σ(x) produces the true error pattern.
Let
σ(μ)(x) = 1 + σ1^(μ) x + σ2^(μ) x^2 + · · · + σlμ^(μ) x^{lμ} (12)
(where σl^(μ) denotes the coefficient of x^l in σ(μ)(x)) be the
minimum-degree polynomial determined at the μth step of
iteration whose coefficients satisfy the first μ Newton's identities of
(11).
17. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
To determine σ(μ+1)(x), we compute the following quantity:
dμ = Sμ+1 + σ1^(μ)Sμ + σ2^(μ)Sμ−1 + · · · + σlμ^(μ)Sμ+1−lμ (13)
This quantity dµ is called the µth discrepancy.
If dμ = 0, the coefficients of σ(μ)(x) satisfy the (μ + 1)th Newton's
identity. We set
σ(μ+1)(x) = σ(μ)(x)
If dμ ≠ 0, the coefficients of σ(μ)(x) do not satisfy the (μ + 1)th
Newton's identity and a correction term must be added to σ(μ)(x)
to obtain σ(μ+1)(x).
18. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
To accomplish this correction, we go back to the steps prior to the
μth step and determine a polynomial σ(p)(x) such that the pth
discrepancy dp ≠ 0 and p − lp [lp is the degree of σ(p)(x)] has the
largest value.
Then
σ(μ+1)(x) = σ(μ)(x) + dμ dp^{−1} x^{μ−p} σ(p)(x), (14)
which is the minimum-degree polynomial whose coefficients satisfy
the first μ + 1 Newton's identities.
19. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
To carry out the iteration of finding σ(x), we fill up the following
table, where lµ is the degree of σ(µ)(x).
μ    σ(μ)(x)    dμ    lμ    μ − lμ
−1   1          1     0     −1
0    1          S1    0     0
1
2
⋮
2t
20. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
Assuming that we have filled out all rows up to and including the
μth row, we fill out the (μ + 1)th row as follows:
1. If dμ = 0, then σ(μ+1)(x) = σ(μ)(x) and lμ+1 = lμ.
2. If dμ ≠ 0, find another row p prior to the μth row such that dp ≠ 0
and the number p − lp in the last column of the table has the
largest value. Then σ(μ+1)(x) is given by (14) and
lμ+1 = max(lμ, lp + μ − p) (15)
In either case,
dμ+1 = Sμ+2 + σ1^(μ+1)Sμ+1 + · · · + σlμ+1^(μ+1)Sμ+2−lμ+1, (16)
where the σl^(μ+1)'s are the coefficients of σ(μ+1)(x).
The polynomial σ(2t)(x) in the last row should be the required
σ(x).
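The table-filling procedure above can be sketched in Python over GF(2^4) (primitive polynomial 1 + x + x^4). Polynomials are coefficient lists indexed by degree, and the syndromes below are those of the (15, 5) example worked out on the following slides; this is an illustrative sketch of the iteration, not a production decoder:

```python
EXP, LOG = [0] * 30, [0] * 16          # GF(2^4) tables, 1 + x + x^4
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a
    LOG[a] = i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011

def mul(x, y):
    return 0 if x == 0 or y == 0 else EXP[LOG[x] + LOG[y]]

def berlekamp(S, t):
    """S[i] holds the syndrome S_{i+1}; returns sigma(x) as a coefficient list."""
    # table rows (sigma, d_mu, l_mu, mu - l_mu); rows[j] is step mu = j - 1
    rows = [([1], 1, 0, -1), ([1], S[0], 0, 0)]
    for mu in range(2 * t):
        sigma, d, l, _ = rows[-1]
        if d == 0:
            new_sigma, new_l = sigma[:], l
        else:
            # prior row with d_p != 0 and p - l_p as large as possible
            j = max((k for k in range(len(rows) - 1) if rows[k][1] != 0),
                    key=lambda k: rows[k][3])
            sig_p, d_p, l_p, _ = rows[j]
            shift = mu - (j - 1)                        # the x^(mu - p) factor
            factor = mul(d, EXP[(15 - LOG[d_p]) % 15])  # d_mu * d_p^{-1}
            new_sigma = sigma + [0] * max(0, shift + len(sig_p) - len(sigma))
            for deg, c in enumerate(sig_p):             # correction term (14)
                new_sigma[deg + shift] ^= mul(factor, c)
            new_l = max(l, l_p + shift)                 # equation (15)
        d_next = 0
        if mu + 1 < 2 * t:                              # equation (16)
            d_next = S[mu + 1]
            for deg in range(1, len(new_sigma)):
                if new_sigma[deg]:
                    d_next ^= mul(new_sigma[deg], S[mu + 1 - deg])
        rows.append((new_sigma, d_next, new_l, (mu + 1) - new_l))
    return rows[-1][0]

# syndromes of the (15, 5) example: S1 = S2 = S4 = 1, S3 = S5 = alpha^10, S6 = alpha^5
print(berlekamp([1, 1, EXP[10], 1, EXP[10], EXP[5]], 3))
# [1, 1, 0, 6], i.e. sigma(x) = 1 + x + alpha^5 x^3
```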
21. Example
Consider the (15, 5) triple-error correcting BCH code. Assume
that the code vector of all zeros,
v = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
is transmitted and the received vector is
r = (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0)
Then r(x) = x^3 + x^5 + x^12.
The minimal polynomials of α, α^2, and α^4 are identical:
φ1(x) = φ2(x) = φ4(x) = 1 + x + x^4.
The elements α^3 and α^6 have the same minimal polynomial,
φ3(x) = φ6(x) = 1 + x + x^2 + x^3 + x^4.
The minimal polynomial of α^5 is
φ5(x) = 1 + x + x^2.
22. Example
Dividing r(x) by φ1(x), φ3(x) and φ5(x), respectively, we obtain
the following remainders :
b1(x) = 1,
b3(x) = 1 + x^2 + x^3,
b5(x) = x^2.
Substituting α, α^2, and α^4 into b1(x), we obtain the following
syndrome components:
S1 = S2 = S4 = 1.
Substituting α^3 and α^6 into b3(x), we obtain
S3 = 1 + α^6 + α^9 = α^10,
S6 = 1 + α^12 + α^18 = α^5.
Substituting α^5 into b5(x), we have
S5 = α^10.
23. Example
Using the iterative procedure, we obtain the table below. Thus, the
error-location polynomial is
σ(x) = σ(6)(x) = 1 + x + α^5x^3.
μ    σ(μ)(x)           dμ     lμ    μ − lμ
−1   1                 1      0     −1
0    1                 1      0     0
1    1 + x             0      1     0    (take p = −1)
2    1 + x             α^5    1     1
3    1 + x + α^5x^2    0      2     1    (take p = 0)
4    1 + x + α^5x^2    α^10   2     2
5    1 + x + α^5x^3    0      3     2    (take p = 2)
6    1 + x + α^5x^3    −      −     −
24. Example
We can easily check that α^3, α^10, and α^12 are the roots of σ(x).
Their inverses are α^12, α^5, and α^3, which are the error-location
numbers. Therefore, the error pattern is
e(x) = x^3 + x^5 + x^12.
Adding e(x) to the received polynomial r(x), we obtain the
all-zero code vector.
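The root-finding step can be done by brute force over the 15 nonzero field elements (the idea behind a Chien search). The sketch below (not part of the original slides) recovers the error positions 3, 5, and 12 from σ(x) = 1 + x + α^5x^3:

```python
EXP, LOG = [0] * 30, [0] * 16          # GF(2^4) tables, 1 + x + x^4
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a
    LOG[a] = i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011

sigma = [1, 1, 0, EXP[5]]              # sigma(x) = 1 + x + alpha^5 x^3
roots = []
for i in range(15):                    # try every alpha^i in turn
    s = 0
    for deg, c in enumerate(sigma):
        if c:
            s ^= EXP[(LOG[c] + deg * i) % 15]
    if s == 0:
        roots.append(i)

# the inverses of the roots are the error-location numbers alpha^j
positions = sorted((15 - i) % 15 for i in roots)
print(roots, positions)                # [3, 10, 12] [3, 5, 12]
```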
25. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
If the number of errors in the received polynomial r(x) is less than
the designed error-correcting capability t of the code, it is not
necessary to carry out the 2t steps of iteration to find the
error-location polynomial σ(x).
Let σ(μ)(x) and dμ be the solution and discrepancy obtained at the
μth step of iteration.
Let lμ be the degree of σ(μ)(x). Now, if dμ and the discrepancies at
the next t − lμ − 1 steps are all zero, σ(μ)(x) is the error-location
polynomial.
26. Iterative Algorithm for Finding the Error-location
Polynomial σ(x) [Berlekamp’s iterative algorithm]
If the number of errors in the received polynomial r(x) is ν (ν ≤ t),
only t + ν steps of iteration are needed to determine the
error-location polynomial σ(x).
If ν is small, the reduction in the number of iteration steps
results in faster decoding.
The iterative algorithm for finding σ(x) applies not only to
binary BCH codes but also to nonbinary BCH codes.