Error-Correcting codes: Application of
convolutional codes to Video Streaming
Diego Napp
Department of Mathematics, University of Aveiro, Portugal
July 22, 2016
Overview
Introduction to Error-correcting codes
Two challenges that recently emerged
Block codes vs convolutional codes
Error Correcting Codes
Basic Problem:
• We want to store bits on a magnetic storage device,
• or send a message (a sequence of zeros and ones).
• Bits get corrupted, 0 → 1 or 1 → 0, but rarely.
What happens when we store/send information and errors occur?
Can we detect them? Can we correct them?
The International Standard Book Number (ISBN)
It can be proved that any two valid ISBN-10's differ in at least two digits.
An ISBN-10
x1 - x2 x3 x4 - x5 x6 x7 x8 x9 - x10
satisfies
āˆ‘_{i=1}^{10} i·x_i ≔ 0 (mod 11).
For example, for the ISBN-10 0-306-40615-2:
s = (0Ɨ10) + (3Ɨ9) + (0Ɨ8) + (6Ɨ7) + (4Ɨ6) + (0Ɨ5) + (6Ɨ4) + (1Ɨ3) + (5Ɨ2) + (2Ɨ1)
  = 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 + 2
  = 132 = 12 Ɨ 11.
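As a quick illustration, a minimal Python sketch of this check. It uses the weights 1, ..., 10 exactly as in the congruence above; this is equivalent modulo 11 to the 10, 9, ..., 1 weighting of the worked example, since the two weighted sums add up to 11 times the digit sum.

```python
# Minimal sketch: verify the ISBN-10 condition sum_{i=1..10} i*x_i = 0 (mod 11).
# The digit 'X' stands for the value 10 (it can only appear as the check digit).
def isbn10_ok(isbn: str) -> bool:
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    return len(digits) == 10 and sum(i * x for i, x in enumerate(digits, 1)) % 11 == 0

print(isbn10_ok("0-306-40615-2"))   # True
print(isbn10_ok("0-306-40615-3"))   # False: a single wrong digit is detected
```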
Sending/storing information: Naive solution
Repeat every bit three times.
Message: 1 1 0 1 · · ·   ==Encoding==>   Codeword: 111 111 000 111 · · ·
Received message: 111 111 001 111 · · ·   ==Decoding==>   1 1 0 1 · · ·
• Good: very easy encoding/decoding.
• Bad: rate 1/3.
Can we do better?
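A minimal Python sketch of this naive scheme, triple repetition with majority-vote decoding, so a single flipped bit per 3-bit block is corrected:

```python
# Rate-1/3 repetition code: send every bit three times, decode by majority vote.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(block) >= 2 else 0 for block in blocks]

codeword = encode([1, 1, 0, 1])     # [1,1,1, 1,1,1, 0,0,0, 1,1,1]
corrupted = list(codeword)
corrupted[8] = 1                    # one bit flipped in the third block
print(decode(corrupted))            # [1, 1, 0, 1]: the error is corrected
```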
Another Solution
• Break the message into 2-bit blocks m = (u1, u2) ∈ F^2.
• Encode each block as follows:
u ⟶ uG, where
G = ( 1 0 1
      0 1 1 );
(u1, u2) ( 1 0 1
           0 1 1 ) = (u1, u2, u1 + u2)
• Better: rate 2/3.
• I can detect 1 error... but I cannot correct it.
• Break the message into 3-bit blocks m = (1, 1, 0) ∈ F^3.
• Encode each block as follows:
u ⟶ uG,   G = ( 1 0 0 1 1 0
                 0 1 0 1 0 1
                 0 0 1 0 1 1 )
For example,
(1, 1, 0) G = (1, 1, 0, 0, 1, 1);
(1, 0, 1) G = (1, 0, 1, 1, 0, 1);
etc.
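A minimal Python sketch of this encoding step (multiplication by G over F_2), reproducing the two products above:

```python
# Block encoding with the 3x6 generator matrix G over F_2.
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def encode(u):
    return (np.array(u) @ G) % 2    # codeword v = uG, arithmetic mod 2

print(encode([1, 1, 0]))            # [1 1 0 0 1 1]
print(encode([1, 0, 1]))            # [1 0 1 1 0 1]
```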
• Rate 3/6 = 1/2, better than before (1/3).
• Only 2^3 codewords in F^6:
C = {(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1), (1, 1, 0, 0, 1, 1),
(1, 0, 1, 1, 0, 1), (0, 1, 1, 1, 1, 0), (1, 1, 1, 0, 0, 0), (0, 0, 0, 0, 0, 0)}
• In F^6 there are 2^6 possible vectors.
• Any two codewords differ in at least 3 coordinates: I can detect and correct 1 error!
Hamming distance
The intuitive concept of "closeness" of two words is formalized through the Hamming distance h(x, y) of words x, y. For two words x, y,
h(x, y) = the number of positions in which x and y differ.
A code C is a subset of F^n, F a finite field. An important parameter of C is its minimum distance
dist(C) = min{h(x, y) | x, y ∈ C, x ≠ y}.
Theorem (Basic error correcting theorem)
1. A code C can detect up to s errors if dist(C) β‰„ s + 1.
2. A code C can correct up to t errors if dist(C) β‰„ 2t + 1.
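A minimal Python sketch computing the Hamming distance and the minimum distance of the 8-word code C listed above; it confirms dist(C) = 3, so by the theorem the code detects 2 errors and corrects 1:

```python
# Hamming distance and minimum distance of the (6, 3) code generated by G over F_2.
from itertools import combinations, product
import numpy as np

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
C = [tuple((np.array(u) @ G) % 2) for u in product([0, 1], repeat=3)]

print(min(hamming(x, y) for x, y in combinations(C, 2)))   # 3
```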
Deļ¬nition
An (n, k) block code C is a k-dimensional subspace of Fn and the
rows of G form a basis of C
C = ImFG = uG : u āˆˆ Fk
(1)
Main coding theory problem
1. Construct codes that can correct a maximal number of errors
while using a minimal amount of redundancy (rate)
2. Construct codes (as above) with eļ¬ƒcient encoding and
decoding procedures
• Coding theory develops methods to protect information against errors.
• Cryptography develops methods to protect information against an enemy (or an unauthorized user).
• Coding theory, the theory of error-correcting codes, is one of the most interesting and most applied parts of mathematics and informatics.
• All real systems that work with digitally represented data, such as CD players, TVs, fax machines, the internet, satellites and mobile phones, require error-correcting codes because all real channels are, to some extent, noisy.
• Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra.
Topics we are investigating
• (Convolutional) codes over finite rings (Z_{p^r}).
• Application of convolutional codes to Distributed Storage Systems.
• Application of convolutional codes to Network Coding, in particular to Video Streaming.
Distributed storage systems
• Fast-growing demand for large-scale data storage.
• Failures occur: redundancy is needed to ensure resilience ⇒ Coding theory.
• Data is stored over a network of nodes: peer-to-peer or data centers ⇒ Distributed storage systems.
Coding Theory
• A file u = (u1, . . . , uk) ∈ F_q^k is redundantly stored across n nodes:
v = (v1, . . . , vn) = uG,
where G is the generator matrix of an (n, k)-code C.
• What happens when nodes fail? The repair problem.
• Metrics:
  • Repair bandwidth
  • Storage cost
  • Locality
Locality
• The locality is the number of nodes necessary to repair one node that fails.
Definition
An (n, k) code has locality r if every symbol of a codeword is a linear combination of at most r other symbols of the codeword.
• Locality β‰„ 2 (if the code is not a replication).
• There is a natural trade-off between distance and locality.
Theorem (Gopalan et al., 2012)
Let C be an (n, k) linear code with minimum distance d and locality r. Then
n - k + 1 - d β‰„ (k - 1)/r.
Pyramid Codes are optimal with respect to this bound
• Pyramid codes were implemented in Facebook and Windows Azure Storage (released in 2007).
• But if two erasures occur... how do we repair multiple erasures?
• ... Next time. Today we focus on Video Streaming.
Video Streaming
• Explosive growth of multimedia traffic in general and of video in particular.
• Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years [1].
[1] Cisco: Forecast and Methodology 2012-2017.
Video Streaming
• Strong demand for implementing highly efficient approaches for video transmission.
Network Coding
What is the best way to disseminate information over a network?
Linear random network coding
It has been proven that linear coding is enough to achieve the upper bound in multicast problems with one or more sources. It optimizes the throughput.
Linear Network Coding
• During one shot the transmitter injects a number of packets into the network, each of which may be regarded as a row vector over a finite field F_{q^m}.
• These packets propagate through the network. Each node creates a random linear combination of the packets it has available and transmits this random combination.
• Finally, the receiver collects such randomly generated packets and tries to infer the set of packets injected into the network.
Rank metric codes are used in Network Coding
• Rank metric codes are matrix codes C ⊂ F_q^{mƗn}, equipped with the rank distance
d_rank(X, Y) = rank(X - Y), where X, Y ∈ F_q^{mƗn}.
• For linear (n, k) rank metric codes over F_{q^m} with m β‰„ n the following analogue of the Singleton bound holds:
d_rank(C) ≤ n - k + 1.
• A code achieving this bound is called Maximum Rank Distance (MRD). Gabidulin codes are a well-known class of MRD codes.
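A small Python sketch of the rank distance over F_2 (an illustrative choice of field): numpy's matrix rank works over the reals, so here the rank of X - Y is computed by Gaussian elimination mod 2.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination mod 2."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2      # clear the rest of the column
        rank += 1
    return rank

def d_rank(X, Y):
    return rank_gf2((X - Y) % 2)

X = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
Y = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 0]])
print(d_rank(X, Y))                              # 1: X and Y differ by a rank-1 matrix
```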
THE IDEA: Multi-shot
• Coding can also be performed over multiple uses of the network, whose internal structure may change at each shot.
• Creating dependencies among the transmitted codewords of different shots can improve the error-correction capabilities.
• Ideal coding techniques for video streaming must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure.
• Although the use of convolutional codes is widespread, their application to video streaming (or with the rank metric) is as yet unexplored.
• We propose a novel scheme that adds complex dependencies to data streams in a quite simple way.
Block codes vs convolutional codes
. . . u2, u1, u0   --G-->   . . . v2 = u2 G, v1 = u1 G, v0 = u0 G
represented in a polynomial fashion:
· · · + u2 D^2 + u1 D + u0   --G-->   · · · + (u2 G) D^2 + (u1 G) D + (u0 G)
                                    = · · · + v2 D^2 + v1 D + v0
Substitute G by G(D) = G_0 + G_1 D + · · · + G_s D^s?
· · · + u2 D^2 + u1 D + u0   --G(D)-->   · · · + (u2 G_0 + u1 G_1 + u0 G_2) D^2 + (u1 G_0 + u0 G_1) D + u0 G_0
                                       = · · · + v2 D^2 + v1 D + v0
Block codes: C = {uG} = Im_F G  ∼  {u(D)G} = Im_F G(D)
Convolutional codes: C = {u(D)G(D)} = Im_{F((D))} G(D)
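A minimal Python sketch of the convolution above, v_t = u_t G_0 + u_{t-1} G_1 + . . . + u_{t-s} G_s over F_2, for a toy choice of coefficient matrices (G_0 is the block-code generator used earlier, G_1 is purely illustrative):

```python
import numpy as np

G0 = np.array([[1, 0, 1],
               [0, 1, 1]])              # block-code generator from the earlier slides
G1 = np.array([[1, 1, 0],
               [0, 0, 1]])              # an illustrative second coefficient matrix
Gs = [G0, G1]                           # G(D) = G0 + G1*D

def conv_encode(u_blocks):
    """u_blocks: length-2 message blocks u_0, u_1, ...; returns the length-3 blocks v_t."""
    v = []
    for t in range(len(u_blocks)):
        vt = np.zeros(3, dtype=int)
        for i, Gi in enumerate(Gs):
            if t - i >= 0:
                vt = (vt + np.array(u_blocks[t - i]) @ Gi) % 2
        v.append(vt)
    return v

# v_0 = u_0 G_0, v_1 = u_1 G_0 + u_0 G_1, v_2 = u_2 G_0 + u_1 G_1, as in the expansion above.
print(conv_encode([[1, 0], [0, 1], [1, 1]]))
```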
Deļ¬nition
A convolutional code C is a F((D))-subspace of Fn((D)).
A matrix G(D) whose rows form a bases for C is called an encoder.
If C has rank k then we say the C has rate k/n.
C = ImF((D))G(D) = u(D)G(D) : u(D) āˆˆ Fk
((D))
= KerF[D]H(D) = {v(D) āˆˆ Fn
[D] : v(D)H(D) = 0}
where H(D) is called the parity-check of C.
Remark
One can also consider the ring of polynomials F[D] instead of
Laurent series F((D)) and deļ¬ne C as a F[D]-module of Fn[D].
A convolutional encoder is also a linear device which maps
u(0), u(1), · · ·  ⟶  v(0), v(1), . . .
In this sense it is the same as a block encoder. The difference is that the convolutional encoder has an internal "storage vector" or "memory":
v(i) does not depend only on u(i) but also on the storage vector x(i),
x(i + 1) = A x(i) + B u(i)
v(i) = C x(i) + E u(i)    (2)
where A ∈ F^{Ī“ĆĪ“}, B ∈ F^{Ī“Ćk}, C ∈ F^{nĆĪ“}, E ∈ F^{nƗk}.
The encoder
G(D) = ( D^2 + 1   D^2 + D + 1 )
has the following implementation as a shift register: two memory cells and two binary adders (āŠ•). The slides step through the register contents as the input bits . . . 0, 1, 1 are shifted in, producing the two output streams 1, 1, 1 and 0, 0, 1 (diagrams omitted).
Example
Let the convolutional code be given by the matrices

( x1(i+1) )   ( 0 1 ) ( x1(i) )   ( 1 )
( x2(i+1) ) = ( 0 0 ) ( x2(i) ) + ( 0 ) u(i)

( v1(i) )   ( 1 0 ) ( x1(i) )   ( 1 )
( v2(i) ) = ( 1 1 ) ( x2(i) ) + ( 1 ) u(i)

We can compute an encoder
G(D) = E + B (D^{-1} I_m - A)^{-1} C = ( 1 + D + D^2   1 + D^2 ).
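A minimal Python sketch of this state-space encoder over F_2, assuming the row-vector convention x(i+1) = x(i)A + u(i)B, v(i) = x(i)C + u(i)E, which is the one consistent with the formula G(D) = E + B(D^{-1}I - A)^{-1}C; feeding in an impulse reproduces the coefficients of (1 + D + D^2, 1 + D^2).

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])      # state update
B = np.array([1, 0])                # input  -> state
C = np.array([[1, 0], [1, 1]])      # state  -> output
E = np.array([1, 1])                # input  -> output (direct feedthrough)

def encode(u_bits):
    """Run the encoder and return the output pairs v(i), all arithmetic over F_2."""
    x = np.zeros(2, dtype=int)      # the memory starts empty
    out = []
    for u in u_bits:
        v = (x @ C + u * E) % 2     # v(i) = x(i)C + u(i)E
        x = (x @ A + u * B) % 2     # x(i+1) = x(i)A + u(i)B
        out.append(tuple(v))
    return out

# Impulse response: the two output streams are the coefficient sequences of G(D).
print(encode([1, 0, 0]))            # [(1, 1), (1, 0), (1, 1)] -> (1 + D + D^2, 1 + D^2)
```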
Example
A physical realization for the encoder G(D) = ( 1 + D + D^2   1 + D^2 ). This encoder has degree 2 and memory 2.
Clearly any matrix which is F(D)-equivalent to G(D) is also an encoder:
G'(D) = ( 1   (1 + D^2)/(1 + D + D^2) )
Example
A physical realization for the generator matrix G'(D). This encoder has degree 2 and infinite memory.
Example
A physical realization for the catastrophic encoder G''(D) = ( 1 + D^3   1 + D + D^2 + D^3 ). This encoder has degree 3 and memory 3.
Polynomial encoders
Two encoders G(D), G'(D) generate the same code if there exists an invertible matrix U(D) such that G(D) = U(D)G'(D).
Definition
A generator matrix G(D) is said to be non-catastrophic if for every v(D) = u(D)G(D),
supp(v(D)) is finite ⇒ supp(u(D)) is finite.
Theorem
G(D) is non-catastrophic if G(D) admits a polynomial right inverse.
Example
The encoder
G(D) = ( 1 + D   0       1       D
         1       D       1 + D   0 )
is noncatastrophic as an inverse is
H(D) = ( 0        1
         0        0
         0        0
         D^{-1}   1 + D^{-1} )
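A small sympy sketch checking this claim: multiplying G(D) by the displayed H(D) and reducing the coefficients mod 2 gives the 2 Ɨ 2 identity.

```python
import sympy as sp

D = sp.symbols('D')
G = sp.Matrix([[1 + D, 0, 1, D],
               [1, D, 1 + D, 0]])
H = sp.Matrix([[0, 1],
               [0, 0],
               [0, 0],
               [1/D, 1 + 1/D]])

P = (G * H).applyfunc(sp.expand)                               # entries become polynomials in D
P = P.applyfunc(lambda e: sp.Poly(e, D, modulus=2).as_expr())  # reduce coefficients mod 2
print(P)                                                       # Matrix([[1, 0], [0, 1]])
```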
Historical Remarks
• Convolutional codes were introduced by Elias (1955).
• The theory was imperfectly understood until a series of papers by Forney in the 70's on the algebra of the k Ɨ n matrices over the field of rational functions in the delay operator D.
• They became widespread in practice with Viterbi decoding. They are among the most widely implemented codes in (wireless) communications.
• The field is typically F_2, but in the last decade a renewed interest has grown in convolutional codes over large fields, trying to fully exploit the potential of convolutional codes.
In Applications
• In block coding, n and k are normally considered large.
• Convolutional codes are typically studied for n and k small and fixed (n = 2 and k = 1 is common) and for several values of Ī“.
• Decoding over the symmetric channel is, in general, difficult.
• The field is typically F_2. The degree cannot be too large, so that the Viterbi decoding algorithm remains efficient.
• Convolutional codes over large alphabets have attracted much attention in recent years.
• In [Tomas, Rosenthal, Smarandache 2012]: decoding over the erasure channel is easy and Viterbi is not needed, just linear algebra.
MDS convolutional codes over F
The Hamming weight of a polynomial vector
v(D) = āˆ‘_{i∈N} v_i D^i = v_0 + v_1 D + v_2 D^2 + · · · + v_Ī½ D^Ī½ ∈ F[D]^n
is defined as wt(v(D)) = āˆ‘_{i=0}^{Ī½} wt(v_i).
The free distance of a convolutional code C is given by
dfree(C) = min{wt(v(D)) | v(D) ∈ C and v(D) ≠ 0}.
• We are interested in the maximum possible value of dfree(C).
• For block codes (Ī“ = 0) we know that the maximum value is given by the Singleton bound: n - k + 1.
• This bound can be achieved if |F| > n.
Theorem
Rosenthal and Smarandache (1999) showed that the free distance of a convolutional code of rate k/n and degree Ī“ must be upper bounded by
dfree(C) ≤ (n - k)(⌊Γ/k⌋ + 1) + Ī“ + 1.    (3)
A code achieving (3) is called Maximum Distance Separable (MDS), and if it achieves it "as fast as possible" it is called strongly MDS.
• Allen conjectured (1999) the existence of convolutional codes that are both sMDS and MDP when k = 1 and n = 2.
• Rosenthal and Smarandache (2001) provided the first concrete construction of MDS convolutional codes.
• Gluesing-Luerssen et al. (2006) provided the first construction of strongly MDS codes when (n - k) | Ī“.
• Napp and Smarandache (2016) provided the first construction of strongly MDS codes for all rates and degrees.
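A tiny Python sketch evaluating the right-hand side of (3) for a few parameter choices; for Ī“ = 0 it reduces to the classical Singleton bound n - k + 1.

```python
def generalized_singleton_bound(n, k, delta):
    """Right-hand side of (3): (n - k)(floor(delta/k) + 1) + delta + 1."""
    return (n - k) * (delta // k + 1) + delta + 1

print(generalized_singleton_bound(2, 1, 0))   # 2  (= n - k + 1, the block-code case)
print(generalized_singleton_bound(2, 1, 2))   # 6
print(generalized_singleton_bound(3, 1, 2))   # 9
```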
Deļ¬nition
Another important distance measure for a convolutional code is
the jth column distance dc
j (C), (introduced by Costello), given by
dc
j (C) = min wt(v[0,j](D)) | v(D) āˆˆ C and v0 = 0
where v[0,j](D) = v0 + v1D + . . . + vj Dj represents the j-th
truncation of the codeword v(D) āˆˆ C.
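A brute-force Python sketch of this definition for a small rate-1/2 code, here the encoder G(D) = (1 + D + D^2, 1 + D^2) from the earlier example (k = 1, n = 2, over F_2); since G_0 = (1, 1) is nonzero, requiring v_0 ≠ 0 is the same as requiring u_0 ≠ 0.

```python
from itertools import product

g = [(1, 1), (1, 0), (1, 1)]                 # coefficient vectors G_0, G_1, G_2 of G(D)

def column_distance(j):
    """d^c_j: minimum weight of v_[0,j] over inputs with u_0 != 0, by exhaustive search."""
    best = None
    for u in product([0, 1], repeat=j + 1):  # input coefficients u_0, ..., u_j
        if u[0] == 0:
            continue
        weight = 0
        for t in range(j + 1):               # v_t = sum_i u_{t-i} G_i, truncated at j
            vt = [sum(u[t - i] * g[i][c] for i in range(len(g)) if t - i >= 0) % 2
                  for c in range(2)]
            weight += sum(vt)
        best = weight if best is None else min(best, weight)
    return best

print([column_distance(j) for j in range(4)])   # [2, 3, 3, 4]
```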
The column distances satisfy
d^c_0 ≤ d^c_1 ≤ · · · ≤ lim_{j→∞} d^c_j(C) = dfree(C) ≤ (n - k)(⌊Γ/k⌋ + 1) + Ī“ + 1.
The j-th column distance is upper bounded as follows:
d^c_j(C) ≤ (n - k)(j + 1) + 1.    (4)
How do we construct MDP codes?
The construction of MDP convolutional codes boils down to the construction of superregular matrices.
LT-Superregular matrices
Definition [Gluesing-Luerssen, Rosenthal, Smarandache (2006)]
A lower triangular matrix
B = ( a_0
      a_1    a_0
      ...    ...      ...
      a_j    a_{j-1}  · · ·  a_0 )    (5)
is LT-superregular if all of its minors with no zeros in the diagonal are nonsingular.
Remark
Note that due to such a lower triangular configuration the remaining minors are necessarily zero.
Example
β^3 + Ī² + 1 = 0 ⇒
( 1
  Ī²     1
  β^3   Ī²     1
  Ī²     β^3   Ī²     1
  1     Ī²     β^3   Ī²     1 )  ∈ F_{2^3}^{5Ɨ5}
is LT-superregular.
Example
α^5 + α^2 + 1 = 0 ⇒
( 1
  1
  6     1
  9     6     1
  6     9     6     1
  6     9     6     1
  1     6     9     6     1 )  ∈ F_{2^5}^{7Ɨ7}
is LT-superregular.
Remarks
• Construction of classes of LT-superregular matrices is very difficult due to their triangular configuration.
• Only two classes exist:
1. Rosenthal et al. (2006) presented the first construction. For any n there exists a prime number p such that
( C(n,0)
  C(n-1,1)      C(n,0)
  ...           ...        ...
  C(n-1,n-1)    · · ·      C(n-1,1)    C(n,0) )  ∈ F_p^{nƗn}
is LT-superregular, where C(a,b) denotes the binomial coefficient. Bad news: this requires a field with very large characteristic.
Remarks
2. Almeida, Napp and Pinto (2013) gave the first construction over any characteristic: let Ī± be a primitive element of a finite field F of characteristic p. If |F| β‰„ p^{2M}, then the following matrix
( α^{2^0}
  α^{2^1}       α^{2^0}
  α^{2^2}       α^{2^1}     α^{2^0}
  ...           ...         ...
  α^{2^{M-1}}   · · ·       · · ·      α^{2^0} )
is LT-superregular. Bad news: |F| is very large.
Performance over the erasure channel
Theorem
Let C be an (n, k, Ī“) convolutional code and d^c_{j_0} the j = j_0-th column distance. If in any sliding window of length (j_0 + 1)n at most d^c_{j_0} - 1 erasures occur, then we can completely recover the transmitted sequence.
Example
(Received stream: segments of correctly received symbols separated by erasure bursts of 60 and 80 symbols.)
A [202, 101] MDS block code can correct 101 erasures in a window of 202 symbols (recovery rate 101/202) ⇒ it cannot correct this window.
A (2, 1, 50) MDP convolutional code also has 50% erasure-correcting capability: (L + 1)n = 101 Ɨ 2 = 202. Take a window of 120 symbols, correct it, and continue until the whole window is corrected.
We have flexibility in choosing the size and position of the sliding window.
Fundamental Open Problems
• Come up with LT-superregular matrices over small fields.
• What is the minimum field size needed to construct LT-superregular matrices?
• Typically convolutional codes are decoded via the Viterbi decoding algorithm. The complexity of this algorithm grows exponentially with the McMillan degree. New classes of codes coming with more efficient decoding algorithms are needed.
• Good constructions of convolutional codes for the rank metric.
• Good constructions tailor-made to deal with bursts of erasures (in both the Hamming and the rank metric).
References
G.D. Forney, Jr. (1975), "Minimal bases of rational vector spaces, with applications to multivariable linear systems", SIAM J. Control 13, pp. 493-520.
G.D. Forney, Jr. (1970), "Convolutional codes I: Algebraic structure", IEEE Trans. Inform. Theory, vol. IT-16, pp. 720-738.
R.J. McEliece (1998), The Algebraic Theory of Convolutional Codes, Handbook of Coding Theory Vol. 1, North-Holland, Amsterdam.
R. Johannesson and K. Zigangirov (1998), Fundamentals of Convolutional Coding, IEEE Communications Society, IEEE Information Theory Society and Vehicular Technology Society.
Summary of the basics
• Convolutional codes are block codes with memory. Convolutional codes generalize linear block codes in a natural way.
• Convolutional codes capable of decoding a large number of errors per time interval require a large free distance and a good distance profile.
• In order to construct good convolutional codes we need to construct superregular matrices over F: difficult.
• Convolutional codes treat the information as a stream: great potential for video streaming.
• A lot of mathematics is needed: linear algebra, systems theory, finite fields, rings/module theory, etc.
Thank you very much for the invitation!
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
Ā 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
Ā 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
Ā 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
Ā 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
Ā 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
Ā 
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofszkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
Alex Pruden
Ā 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
Ā 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
Ā 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
Ā 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
UiPathCommunity
Ā 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
Ā 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
Ā 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
Ā 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
Vlad Stirbu
Ā 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
Ā 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
Ā 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
Ā 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
Ā 

Recently uploaded (20)

Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
Ā 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Ā 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Ā 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
Ā 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Ā 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Ā 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Ā 
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofszkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
Ā 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Ā 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Ā 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Ā 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
Ā 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ā 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Ā 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
Ā 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
Ā 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
Ā 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Ā 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
Ā 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
Ā 

Error-Correcting codes: Application of convolutional codes to Video Streaming
  • 11–12. Another Solution
  • Break the message into 2-bit blocks m = (u1, u2) ∈ F².
  • Encode each block as u → uG, where
        G = [ 1 0 1
              0 1 1 ],   (u1, u2) G = (u1, u2, u1 + u2).
  • Better rate: 2/3.
  • We can detect 1 error... but we cannot correct it (a small encoding sketch follows).
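A minimal sketch of this encoding over F₂, assuming nothing beyond the generator matrix shown above (the helper names are illustrative, not from the slides):

```python
# Encode 2-bit blocks with G = [[1,0,1],[0,1,1]] over F2: (u1, u2) -> (u1, u2, u1 + u2).
G = [[1, 0, 1],
     [0, 1, 1]]

def encode(u):
    """Multiply the row vector u by G over F2."""
    return [sum(u[i] * G[i][j] for i in range(2)) % 2 for j in range(3)]

def error_detected(v):
    """A received word is inconsistent iff its third bit differs from the parity u1 + u2."""
    return (v[0] + v[1]) % 2 != v[2]

c = encode([1, 0])            # [1, 0, 1]
r = [1, 1, 1]                 # the middle bit of c got flipped
print(c, error_detected(c))   # [1, 0, 1] False  (no error detected)
print(r, error_detected(r))   # [1, 1, 1] True   (error detected, but we cannot tell which bit)
```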
  • 13–14.
  • Break the message into 3-bit blocks, e.g. m = (1, 1, 0) ∈ F³.
  • Encode each block as u → uG, where
        G = [ 1 0 0 1 1 0
              0 1 0 1 0 1
              0 0 1 0 1 1 ].
  • For example,
        (1, 1, 0) G = (1, 1, 0, 0, 1, 1),   (1, 0, 1) G = (1, 0, 1, 1, 0, 1),   etc.
  • 15–16.
  • Rate 3/6 = 1/2, better than before (1/3).
  • Only 2³ = 8 codewords in F⁶:
        C = { (1,0,0,1,1,0), (0,1,0,1,0,1), (0,0,1,0,1,1), (1,1,0,0,1,1),
              (1,0,1,1,0,1), (0,1,1,1,1,0), (1,1,1,0,0,0), (0,0,0,0,0,0) }.
  • In F⁶ we have 2⁶ = 64 possible vectors.
  • Any two codewords differ in at least 3 coordinates: we can detect and correct 1 error!
  • 17–18. Hamming distance
  The intuitive concept of "closeness" of two words is formalized through the Hamming distance h(x, y) of words x, y: for two words x, y,
        h(x, y) = the number of symbols in which x and y differ.
  A code C is a subset of Fⁿ, F a finite field. An important parameter of C is its minimum distance,
        dist(C) = min{ h(x, y) | x, y ∈ C, x ≠ y }.

  Theorem (Basic error correcting theorem)
  1. A code C can detect up to s errors if dist(C) ≄ s + 1.
  2. A code C can correct up to t errors if dist(C) ≄ 2t + 1.
  (A numerical check on the (6, 3) code above follows.)
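A small check of these definitions on the (6, 3) code listed above; a sketch only, with illustrative helper names:

```python
from itertools import combinations

# The eight codewords of the (6,3) code from the previous slides.
C = [(1,0,0,1,1,0), (0,1,0,1,0,1), (0,0,1,0,1,1), (1,1,0,0,1,1),
     (1,0,1,1,0,1), (0,1,1,1,1,0), (1,1,1,0,0,0), (0,0,0,0,0,0)]

def hamming(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

dist_C = min(hamming(x, y) for x, y in combinations(C, 2))
print("dist(C) =", dist_C)                              # 3
print("detects up to", dist_C - 1, "errors")            # 2
print("corrects up to", (dist_C - 1) // 2, "errors")    # 1
```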
  • 19–20. Definition
  An (n, k) block code C is a k-dimensional subspace of Fⁿ, and the rows of G form a basis of C:
        C = Im_F G = { uG : u ∈ F^k }.        (1)

  Main coding theory problem
  1. Construct codes that can correct a maximal number of errors while using a minimal amount of redundancy (rate).
  2. Construct codes (as above) with efficient encoding and decoding procedures.
  • 21.
  • Coding theory develops methods to protect information against errors.
  • Cryptography develops methods to protect information against an enemy (or an unauthorized user).
  • Coding theory, the theory of error-correcting codes, is one of the most interesting and applied parts of mathematics and informatics.
  • All real systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require error-correcting codes because all real channels are, to some extent, noisy.
  • Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra.
  • 22–24. Topics we are investigating
  • (Convolutional) codes over finite rings (Z_{p^r}).
  • Application of convolutional codes to Distributed Storage Systems.
  • Application of convolutional codes to Network Coding, in particular to Video Streaming.
  • 25–27. Distributed storage systems
  • Fast-growing demand for large-scale data storage.
  • Failures occur: redundancy is needed to ensure resilience ⇒ coding theory.
  • Data is stored over a network of nodes (peer-to-peer or data centers) ⇒ distributed storage systems.
  • 28. Coding Theory
  • A file u = (u1, ..., uk) ∈ F_q^k is redundantly stored across n nodes as v = (v1, ..., vn) = uG, where G is the generator matrix of an (n, k) code C.
  • 29–30.
  • What happens when nodes fail? The repair problem.
  • Metrics: repair bandwidth, storage cost, locality.
  • 31–33. Locality
  • The locality is the number of nodes necessary to repair one node that fails.

  Definition
  An (n, k) code has locality r if every symbol in a codeword is a linear combination of at most r other symbols in the codeword.

  • Locality ≄ 2 (if the code is not a replication).
  • There is a natural trade-off between distance and locality (a bound check is sketched below).

  Theorem (Gopalan et al., 2012)
  Let C be an (n, k) linear code with minimum distance d and locality r. Then
        n − k + 1 − d ≄ ⌈k/r⌉ − 1.
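A one-function sketch of this trade-off, rewriting the bound as d ≤ n − k − ⌈k/r⌉ + 2 (the parameters in the example are made up for illustration):

```python
from math import ceil

def max_distance_with_locality(n, k, r):
    """Largest minimum distance allowed by the bound n - k + 1 - d >= ceil(k/r) - 1."""
    return n - k - ceil(k / r) + 2

# A (10, 6) code with locality 3: distance at most 4, versus Singleton's n - k + 1 = 5.
print(max_distance_with_locality(10, 6, 3))   # 4
```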
  • 34–35. Pyramid Codes are optimal with respect to this bound
  • Pyramid codes were implemented in Facebook and Windows Azure Storage (released in 2007).
  • But if two erasures occur... how do we repair multiple erasures?
  • ... Next time. Today we focus on Video Streaming.
  • 36–38. Video Streaming
  • Explosive growth of multimedia traffic in general and of video in particular.
  • Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years [1].
  • Strong demand for highly efficient approaches to video transmission.
  [1] Cisco: Forecast and Methodology 2012–2017.
  • 39–40. Network Coding
  What is the best way to disseminate information over a network?

  Linear random network coding
  It has been proven that linear coding is enough to achieve the upper bound in multicast problems with one or more sources. It optimizes the throughput.
  • 41–43. Linear Network Coding
  • During one shot, the transmitter injects a number of packets into the network, each of which may be regarded as a row vector over a finite field F_{q^m}.
  • These packets propagate through the network. Each node creates a random linear combination of the packets it has available and transmits this random combination.
  • Finally, the receiver collects such randomly generated packets and tries to infer the set of packets injected into the network (a toy simulation follows).
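A toy simulation of one shot of random linear network coding, sketched over the small prime field F₇ for readability (the field size, packet length and the assumption that the combination coefficients travel in the packet headers are illustrative choices, not taken from the slides):

```python
import random

P = 7  # a small prime field F_7, for illustration only

def solve(T, Y):
    """Solve T X = Y over F_P by Gaussian elimination (T is m x k with full column rank)."""
    m, k = len(T), len(T[0])
    A = [T[i][:] + Y[i][:] for i in range(m)]          # augmented matrix [T | Y]
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if A[r][col]), None)
        if piv is None:
            raise ValueError("collected packets do not have full rank")
        A[row], A[piv] = A[piv], A[row]
        inv = pow(A[row][col], P - 2, P)               # inverse via Fermat's little theorem
        A[row] = [x * inv % P for x in A[row]]
        for r in range(m):
            if r != row and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[row])]
        row += 1
    return [A[i][k:] for i in range(k)]

# The source injects k packets of length 5 (one shot).
X = [[1, 2, 3, 4, 5], [6, 0, 1, 2, 3], [4, 4, 4, 1, 0]]
k, m = len(X), 4                                       # the receiver collects m >= k packets

# Each collected packet is a random linear combination of the injected ones; the
# coefficients T are assumed to be known to the receiver (carried in the headers).
T = [[random.randrange(P) for _ in range(k)] for _ in range(m)]
Y = [[sum(T[i][j] * X[j][c] for j in range(k)) % P for c in range(5)] for i in range(m)]

print(solve(T, Y) == X)   # True whenever T has full column rank (very likely)
```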
  • 44–46. Rank metric codes are used in Network Coding
  • Rank metric codes are matrix codes C ⊂ F_q^{mƗn}, equipped with the rank distance
        d_rank(X, Y) = rank(X − Y),   where X, Y ∈ F_q^{mƗn}
    (a small computation follows).
  • For linear (n, k) rank metric codes over F_{q^m} with m ≄ n, the following analogue of the Singleton bound holds:
        d_rank(C) ≤ n − k + 1.
  • A code that achieves this bound is called Maximum Rank Distance (MRD). Gabidulin codes are a well-known class of MRD codes.
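A sketch of the rank distance for binary matrices, with the F₂ rank computed by elementary row reduction (the function names are mine):

```python
def rank_gf2(M):
    """Rank of a 0/1 matrix over F2: pack rows into integers and row-reduce with XOR."""
    rows = [int("".join(map(str, r)), 2) for r in M]
    rank = 0
    for bit in reversed(range(len(M[0]))):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank

def rank_distance(X, Y):
    """d_rank(X, Y) = rank(X - Y); over F2 the difference is an entrywise XOR."""
    diff = [[a ^ b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    return rank_gf2(diff)

X = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
Y = [[1, 0, 1], [0, 0, 0], [1, 1, 0]]
print(rank_distance(X, Y))   # 1: the two matrices differ by a rank-one matrix
```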
  • 47–51. THE IDEA: Multi-shot
  • Coding can also be performed over multiple uses of the network, whose internal structure may change at each shot.
  • Creating dependencies among the transmitted codewords of different shots can improve the error-correction capabilities.
  • Ideal coding techniques for video streaming must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure.
  • Although the use of convolutional codes is widespread, their application to video streaming (or with the rank metric) is as yet unexplored.
  • We propose a novel scheme that adds complex dependencies to data streams in a quite simple way.
  • 52–53. Block codes vs convolutional codes
        ..., u2, u1, u0  --G-->  ..., v2 = u2 G, v1 = u1 G, v0 = u0 G

  Represented in a polynomial fashion:
        ... + u2 D² + u1 D + u0  --G-->  ... + (u2 G) D² + (u1 G) D + u0 G,
  with coefficients v2, v1, v0. What if we substitute G by G(D) = G0 + G1 D + ... + Gs D^s?
        ... + u2 D² + u1 D + u0  --G(D)-->  ... + (u2 G0 + u1 G1 + u0 G2) D² + (u1 G0 + u0 G1) D + u0 G0,
  i.e. v0 = u0 G0,  v1 = u1 G0 + u0 G1,  v2 = u2 G0 + u1 G1 + u0 G2, ... (a small encoder sketch follows).

  Block codes:          C = { uG } = Im_F G,  viewed on streams as { u(D) G }.
  Convolutional codes:  C = { u(D) G(D) } = Im_{F((D))} G(D).
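A sketch of this sliding convolution v_t = Ī£_i u_{t−i} G_i over F₂, using the rate-1/2 encoder G(D) = (1 + D + D², 1 + D²) that appears later in the talk (the function name and the choice of input are mine):

```python
# Coefficient matrices of G(D) = G0 + G1 D + G2 D^2 for G(D) = (1 + D + D^2, 1 + D^2):
# here k = 1 and n = 2, and each Gi is a k x n matrix over F2.
G_coeffs = [[[1, 1]], [[1, 0]], [[1, 1]]]          # G0, G1, G2

def conv_encode(u_blocks, G_coeffs):
    """v_t = sum_i u_{t-i} G_i over F2, for a finite stream of k-bit information blocks."""
    k, n = len(G_coeffs[0]), len(G_coeffs[0][0])
    out = []
    for t in range(len(u_blocks) + len(G_coeffs) - 1):   # keep going until the memory is flushed
        v = [0] * n
        for i, Gi in enumerate(G_coeffs):
            if 0 <= t - i < len(u_blocks):
                u = u_blocks[t - i]
                for a in range(k):
                    for b in range(n):
                        v[b] ^= u[a] & Gi[a][b]
        out.append(tuple(v))
    return out

print(conv_encode([[1], [0], [1], [1]], G_coeffs))
# [(1, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 1)]
```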
  • 54–56. Definition
  A convolutional code C is an F((D))-subspace of Fⁿ((D)).
  A matrix G(D) whose rows form a basis of C is called an encoder. If C has rank k then we say that C has rate k/n.
        C = Im_{F((D))} G(D) = { u(D) G(D) : u(D) ∈ F^k((D)) }
          = Ker_{F[D]} H(D) = { v(D) ∈ Fⁿ[D] : v(D) H(D) = 0 },
  where H(D) is called the parity-check matrix of C.

  Remark
  One can also consider the ring of polynomials F[D] instead of the Laurent series F((D)) and define C as an F[D]-submodule of Fⁿ[D].
  • 57. A convolutional encoder is also a linear device which maps
        u(0), u(1), ...  →  v(0), v(1), ...
  In this sense it is the same as a block encoder. The difference is that the convolutional encoder has an internal "storage vector" or "memory": v(i) does not depend only on u(i) but also on the storage vector x(i),
        x(i + 1) = A x(i) + B u(i),
        v(i)     = C x(i) + E u(i),        (2)
  with A ∈ F^{Γ×Γ}, B ∈ F^{Γ×k}, C ∈ F^{n×Γ}, E ∈ F^{n×k}.
  • 58–61. The encoder G(D) = (D² + 1, D² + D + 1) has a shift-register implementation.
  [Figure: a two-cell shift register with two XOR taps, shown over four animation frames as the input stream ..., 0, 1, 1 is shifted in and the two output streams are produced.]
  • 62. Example
  Let the convolutional code be given by the matrices (written here with row vectors, x(i + 1) = x(i) A + u(i) B and v(i) = x(i) C + u(i) E, so that the formula below type-checks):
        A = [ 0 1 ; 0 0 ],   B = ( 1 0 ),   C = [ 1 0 ; 1 1 ],   E = ( 1 1 ).
  We can compute an encoder
        G(D) = E + B (D⁻¹ I − A)⁻¹ C = ( 1 + D + D²,  1 + D² ).
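A sketch of this state-space encoder over F₂, checking that it reproduces the same output stream as the sliding convolution above (the helper names are mine; two zero blocks are appended to flush the memory):

```python
# Row-vector state-space realization of G(D) = (1 + D + D^2, 1 + D^2) over F2:
# x(i+1) = x(i) A + u(i) B,  v(i) = x(i) C + u(i) E.
A = [[0, 1], [0, 0]]
B = [[1, 0]]                 # 1 x 2
C = [[1, 0], [1, 1]]
E = [[1, 1]]                 # 1 x 2

def vecmat(x, M):
    """Row vector times matrix over F2."""
    return [sum(x[i] & M[i][j] for i in range(len(x))) % 2 for j in range(len(M[0]))]

def encode(u_bits):
    x = [0, 0]                               # the two memory cells start empty
    out = []
    for u in u_bits + [0, 0]:                # append zeros so the memory is flushed
        v = [a ^ b for a, b in zip(vecmat(x, C), vecmat([u], E))]
        x = [a ^ b for a, b in zip(vecmat(x, A), vecmat([u], B))]
        out.append(tuple(v))
    return out

print(encode([1, 0, 1, 1]))
# [(1, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 1)]: same stream as the convolution sketch above
```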
  • 63–65. Example
  A physical realization exists for the encoder G(D) = (1 + D + D², 1 + D²); this encoder has degree 2 and memory 2. Clearly, any matrix which is F(D)-equivalent to G(D) is also an encoder, e.g.
        G′(D) = ( 1,  (1 + D²)/(1 + D + D²) ).
  A physical realization of the generator matrix G′(D) has degree 2 and infinite memory.
  A physical realization of the catastrophic encoder
        G″(D) = ( 1 + D³,  1 + D + D² + D³ )
  has degree 3 and memory 3.
  • 66–68. Polynomial encoders
  Two encoders G(D), G′(D) generate the same code if there exists an invertible matrix U(D) such that G(D) = U(D) G′(D).

  Definition
  A generator matrix G(D) is said to be non-catastrophic if, for every v(D) = u(D) G(D),
        supp(v(D)) is finite  ⇒  supp(u(D)) is finite.

  Theorem
  G(D) is non-catastrophic if G(D) admits a polynomial right inverse.
  • 69. Example
  The encoder
        G(D) = [ 1 + D   0    1      D
                 1       D   1 + D   0 ]
  is non-catastrophic, as a right inverse is
        H(D) = [ 0     1
                 0     0
                 0     0
                 D⁻¹   1 + D⁻¹ ]
  (a check of G(D) H(D) = I is sketched below).
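A sketch that multiplies these two polynomial matrices over F₂ and confirms G(D) H(D) = I; entries are represented simply as sets of exponents of D (the representation and helper names are mine):

```python
from collections import Counter

def pmul(p, q):
    """Product of two F2 Laurent polynomials given as sets of exponents."""
    c = Counter()
    for a in p:
        for b in q:
            c[a + b] += 1
    return {e for e, m in c.items() if m % 2}

def padd(p, q):
    """Sum of two F2 polynomials: symmetric difference of the exponent sets."""
    return set(p) ^ set(q)

def matmul(X, Y):
    """Product of matrices whose entries are F2 Laurent polynomials."""
    Z = [[set() for _ in range(len(Y[0]))] for _ in range(len(X))]
    for i in range(len(X)):
        for j in range(len(Y[0])):
            for t in range(len(Y)):
                Z[i][j] = padd(Z[i][j], pmul(X[i][t], Y[t][j]))
    return Z

# {0, 1} means 1 + D, {-1} means D^(-1), set() means the zero polynomial.
G = [[{0, 1}, set(), {0}, {1}],
     [{0}, {1}, {0, 1}, set()]]
H = [[set(), {0}],
     [set(), set()],
     [set(), set()],
     [{-1}, {0, -1}]]

print(matmul(G, H))   # [[{0}, set()], [set(), {0}]], i.e. the 2 x 2 identity matrix
```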
  • 70. Historical Remarks
  • Convolutional codes were introduced by Elias (1955).
  • The theory was imperfectly understood until a series of papers by Forney in the 1970s on the algebra of k Ɨ n matrices over the field of rational functions in the delay operator D.
  • They became widespread in practice with Viterbi decoding; they belong to the most widely implemented codes in (wireless) communications.
  • The field is typically F₂, but in the last decade a renewed interest has grown in convolutional codes over large fields, trying to fully exploit their potential.
  • 71–75. In Applications
  • In block coding, n and k are normally large.
  • Convolutional codes are typically studied for n and k small and fixed (n = 2 and k = 1 is common) and for several values of Γ.
  • Decoding over the symmetric channel is, in general, difficult.
  • The field is typically F₂. The degree cannot be too large, so that the Viterbi decoding algorithm stays efficient.
  • Convolutional codes over large alphabets have attracted much attention in recent years.
  • In [Tomas, Rosenthal, Smarandache 2012]: decoding over the erasure channel is easy and Viterbi is not needed, just linear algebra.
  • 76–80. MDS convolutional codes over F
  The Hamming weight of a polynomial vector
        v(D) = Ī£_{i ∈ N} v_i D^i = v0 + v1 D + v2 D² + ... + v_ν D^ν ∈ F[D]ⁿ
  is defined as
        wt(v(D)) = Ī£_{i=0}^{ν} wt(v_i).
  The free distance of a convolutional code C is given by
        d_free(C) = min { wt(v(D)) | v(D) ∈ C and v(D) ≠ 0 }.
  • We are interested in the maximum possible value of d_free(C).
  • For block codes (Γ = 0) we know that the maximum value is given by the Singleton bound: n − k + 1.
  • This bound can be achieved if |F| ≄ n.
  • 81–82. Theorem
  Rosenthal and Smarandache (1999) showed that the free distance of a convolutional code of rate k/n and degree Γ is upper bounded by
        d_free(C) ≤ (n − k)(⌊Γ/k⌋ + 1) + Γ + 1.        (3)
  A code achieving (3) is called Maximum Distance Separable (MDS), and if it achieves it "as fast as possible" it is called strongly MDS (a small computation with this bound follows).
  • Allen (1999) conjectured the existence of convolutional codes that are both sMDS and MDP when k = 1 and n = 2.
  • Rosenthal and Smarandache (2001) provided the first concrete construction of MDS convolutional codes.
  • Gluesing-Luerssen et al. (2006) provided the first construction of strongly MDS codes when (n − k) | Γ.
  • Napp and Smarandache (2016) provided the first construction of strongly MDS codes for all rates and degrees.
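A one-line helper for bound (3), with the floor made explicit (the parameter choices in the example are illustrative):

```python
def generalized_singleton_bound(n, k, delta):
    """Upper bound (3) on the free distance of a rate k/n convolutional code of degree delta."""
    return (n - k) * (delta // k + 1) + delta + 1

print(generalized_singleton_bound(2, 1, 0))   # 2: for delta = 0 it collapses to n - k + 1
print(generalized_singleton_bound(2, 1, 2))   # 6: the bound for the (2, 1, 2) encoders above
```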
  • 83–86. Definition
  Another important distance measure for a convolutional code is the j-th column distance d^c_j(C) (introduced by Costello), given by
        d^c_j(C) = min { wt(v_{[0,j]}(D)) | v(D) ∈ C and v0 ≠ 0 },
  where v_{[0,j]}(D) = v0 + v1 D + ... + v_j D^j is the j-th truncation of the codeword v(D) ∈ C.

  The column distances satisfy
        d^c_0 ≤ d^c_1 ≤ ... ≤ lim_{j→∞} d^c_j(C) = d_free(C) ≤ (n − k)(⌊Γ/k⌋ + 1) + Γ + 1,
  and the j-th column distance is upper bounded by
        d^c_j(C) ≤ (n − k)(j + 1) + 1.        (4)
  A code whose column distances attain (4) for as long as possible is said to have a Maximum Distance Profile (MDP).

  How do we construct MDP codes? The construction of MDP convolutional codes boils down to the construction of superregular matrices.
  • 87–88. LT-superregular matrices
  Definition [Gluesing-Luerssen, Rosenthal, Smarandache (2006)]
  A lower triangular matrix
        B = [ a0
              a1    a0
              ...          ...
              a_j   a_{j−1}  ...  a0 ]        (5)
  is LT-superregular if all of its minors with no zeros on the diagonal are nonsingular.
  Remark: note that, due to such a lower triangular configuration, the remaining minors are necessarily zero.

  Example
  With β³ + β + 1 = 0, the matrix with first column (1, β, β³, β, 1),
        [ 1
          β    1
          β³   β    1
          β    β³   β    1
          1    β    β³   β    1 ]  ∈ F_{2³}^{5Ɨ5},
  is LT-superregular (a brute-force check is sketched below). A similar 7 Ɨ 7 LT-superregular example exists over F_{2⁵}, with primitive element α satisfying α⁵ + α² + 1 = 0.
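A brute-force check of the definition on the 5 Ɨ 5 example, with F₈ arithmetic coded directly (the bit encoding of F₈ and the helper names are my choices; the expected output assumes the matrix has been read off the slide correctly):

```python
from itertools import combinations

IRRED = 0b1011   # beta^3 + beta + 1: elements of F_8 are 3-bit integers, beta = 0b010

def gf8_mul(a, b):
    """Multiply two F_8 elements: carry-less product reduced modulo beta^3 + beta + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= IRRED
        b >>= 1
    return r

def det_gf8(M):
    """Determinant over F_8 by Laplace expansion (signs are irrelevant in characteristic 2)."""
    if len(M) == 1:
        return M[0][0]
    d = 0
    for j, piv in enumerate(M[0]):
        if piv:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            d ^= gf8_mul(piv, det_gf8(minor))
    return d

col = [0b001, 0b010, 0b011, 0b010, 0b001]   # (1, beta, beta^3 = beta + 1, beta, 1)
n = 5
B = [[col[i - j] if i >= j else 0 for j in range(n)] for i in range(n)]

def is_lt_superregular(B):
    """Check every minor of B whose diagonal entries are all nonzero."""
    n = len(B)
    for r in range(1, n + 1):
        for rows in combinations(range(n), r):
            for cols in combinations(range(n), r):
                if all(B[i][j] for i, j in zip(rows, cols)):
                    sub = [[B[i][j] for j in cols] for i in rows]
                    if det_gf8(sub) == 0:
                        return False
    return True

print(is_lt_superregular(B))   # True, according to the slide's claim
```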
  • 89–90. Remarks
  • Construction of classes of LT-superregular matrices is very difficult due to their triangular configuration.
  • Only two classes exist:
  1. Rosenthal et al. (2006) presented the first construction: for any n there exists a prime number p such that the n Ɨ n lower triangular Toeplitz matrix over F_p with first column of binomial coefficients
        ( C(n, 0), C(n−1, 1), ..., C(n−1, n−1) )
     is LT-superregular. Bad news: it requires a field with very large characteristic.
  2. Almeida, Napp and Pinto (2013) gave the first construction over any characteristic: let α be a primitive element of a finite field F of characteristic p. If |F| ≄ p^{2M}, then the M Ɨ M lower triangular Toeplitz matrix with first column
        ( α^{2⁰}, α^{2¹}, α^{2²}, ..., α^{2^{M−1}} )
     is LT-superregular. Bad news: |F| must be very large.
  • 91–92. Performance over the erasure channel
  Theorem
  Let C be an (n, k, Γ) convolutional code and d^c_{j0} its j0-th column distance. If in any sliding window of length (j0 + 1)n at most d^c_{j0} − 1 erasures occur, then we can completely recover the transmitted sequence (a small window check is sketched below).

  Example
  Consider a received stream containing bursts of roughly 60 and 80 erasures. A [202, 101] MDS block code can correct 101 erasures in a window of 202 symbols (recovery rate 101/202) ⇒ it cannot correct such a window. A (2, 1, 50) MDP convolutional code also has 50% erasure-correcting capability, with (L + 1)n = 101 Ɨ 2 = 202: take a window of 120 symbols, correct it, and continue until the whole stream is corrected. We have flexibility in choosing the size and position of the sliding window.
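A sketch of the sliding-window condition in the theorem (the parameters in the example are made up; 1 marks an erased symbol):

```python
def recoverable(erasures, n, j0, dcj0):
    """True iff every window of (j0 + 1) * n consecutive symbols has at most dcj0 - 1 erasures."""
    w = (j0 + 1) * n
    return all(sum(erasures[i:i + w]) <= dcj0 - 1
               for i in range(len(erasures) - w + 1))

pattern = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
print(recoverable(pattern, n=2, j0=3, dcj0=5))   # True: no window of 8 symbols has 5 or more erasures
```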
  • 93. Fundamental Open Problems
  • Come up with LT-superregular matrices over small fields.
  • What is the minimum field size needed to construct LT-superregular matrices?
  • Typically convolutional codes are decoded via the Viterbi decoding algorithm, whose complexity grows exponentially with the McMillan degree. New classes of codes coming with more efficient decoding algorithms are needed.
  • Good constructions of convolutional codes for the rank metric.
  • Good constructions tailor-made to deal with bursts of erasures (in both the Hamming and the rank metric).
  • 94. References
  • G.D. Forney, Jr. (1975), "Minimal bases of rational vector spaces, with applications to multivariable linear systems", SIAM J. Control 13, 493–520.
  • G.D. Forney, Jr. (1970), "Convolutional codes I: Algebraic structure", IEEE Trans. Inform. Theory, vol. IT-16, pp. 720–738.
  • R.J. McEliece (1998), "The Algebraic Theory of Convolutional Codes", in Handbook of Coding Theory, Vol. 1, North-Holland, Amsterdam.
  • R. Johannesson and K. Zigangirov (1998), Fundamentals of Convolutional Coding, IEEE Communications Society, IEEE Information Theory Society and Vehicular Technology Society.
  • 95. Summary of the basics
  • Convolutional codes are block codes with memory; they generalize linear block codes in a natural way.
  • Convolutional codes capable of decoding a large number of errors per time interval require a large free distance and a good distance profile.
  • In order to construct good convolutional codes we need to construct superregular matrices over F: difficult.
  • Convolutional codes treat the information as a stream: great potential for video streaming.
  • A lot of mathematics is needed: linear algebra, systems theory, finite fields, rings/module theory, etc.

  • 96. Thank you very much for the invitation!