A cornerstone of our digital society stems from the explosive growth of multimedia traffic in general and video in particular. Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years. This massive volume of data has created a strong demand for highly efficient approaches to video transmission. Error-correcting codes for reliable communication have been studied for several decades. However, ideal coding techniques for video streaming are fundamentally different from classical error-correcting codes: to be optimal, they must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure. These unique constraints lead to fascinating new open problems in the design of error-correcting codes. In this talk we aim to look at these problems from a system-theoretic perspective. In particular, we propose the use of convolutional codes.
Error-Correcting codes: Application of convolutional codes to Video Streaming
1. Introduction to Error-correcting codes | Two challenges that recently emerged | Block codes vs convolutional codes
Error-Correcting codes: Application of convolutional codes to Video Streaming
Diego Napp
Department of Mathematics, University of Aveiro, Portugal
July 22, 2016
2. Overview
Introduction to Error-correcting codes
Two challenges that recently emerged
Block codes vs convolutional codes
3. Error Correcting Codes
Basic Problem:
• want to store bits on a magnetic storage device
• or send a message (a sequence of zeros/ones)
• Bits get corrupted, 0 → 1 or 1 → 0, but rarely.
What happens when we store/send information and errors occur?
Can we detect them? Can we correct them?
7. The International Standard Book Number (ISBN)
It can be proved that any two valid ISBN-10s differ in at least two digits.
An ISBN-10
x1 - x2 x3 x4 - x5 x6 x7 x8 x9 - x10
must satisfy
Σ_{i=1}^{10} i·x_i ≡ 0 (mod 11).
For example, for the ISBN-10 0-306-40615-2 (using the equivalent descending weights 10, 9, ..., 1):
s = (0 × 10) + (3 × 9) + (0 × 8) + (6 × 7) + (4 × 6) + (0 × 5) + (6 × 4) + (1 × 3) + (5 × 2) + (2 × 1)
= 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 + 2
= 132 = 12 × 11.
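The check above is easy to verify in code (a small sketch; the function name is our own):

```python
def isbn10_checksum_ok(isbn: str) -> bool:
    """Check the ISBN-10 condition: the weighted digit sum is divisible by 11.

    Uses the descending weights 10, 9, ..., 1 from the example;
    'X' stands for the value 10 in the last position.
    """
    digits = [10 if c in "xX" else int(c) for c in isbn if c not in "- "]
    if len(digits) != 10:
        return False
    s = sum(w * d for w, d in zip(range(10, 0, -1), digits))
    return s % 11 == 0

# The example from the slide: 0-306-40615-2 sums to 132 = 12 * 11.
print(isbn10_checksum_ok("0-306-40615-2"))  # True
# Corrupting a single digit is always detected:
print(isbn10_checksum_ok("0-306-40615-3"))  # False
```

Since the weights are all invertible mod 11, changing any single digit changes the sum mod 11, which is exactly why two valid ISBN-10s must differ in at least two digits.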
10. Sending/storing information: a naive solution
Repeat every bit three times.
Message: 1 1 0 1 · · ·
Encoding ⇒ Codeword: 111 111 000 111 · · ·
Received message: 111 111 001 111 · · ·
Decoding (majority vote) ⇒ 1 1 0 1 · · ·
• Good: very easy encoding/decoding.
• Bad: rate 1/3.
Can we do better???
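The naive scheme can be written out directly; the decoder takes a majority vote inside each 3-bit block (a minimal illustration, with our own function names):

```python
def encode_repetition(bits):
    """Repeat every bit three times (rate 1/3)."""
    return [b for b in bits for _ in range(3)]

def decode_repetition(received):
    """Majority vote inside each block of three: corrects one flip per block."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

codeword = encode_repetition([1, 1, 0, 1])  # [1,1,1, 1,1,1, 0,0,0, 1,1,1]
received = codeword[:]
received[8] = 1                             # one corrupted bit: 000 -> 001
print(decode_repetition(received))          # [1, 1, 0, 1]
```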
12. Another Solution
• Break the message into 2-bit blocks m = (u1, u2) ∈ F^2.
• Encode each block as u → uG, where
G = ( 1 0 1 )
    ( 0 1 1 )
so that (u1, u2) G = (u1, u2, u1 + u2).
• Better rate: 2/3.
• I can detect 1 error... but I cannot correct it.
14.
• Break the message into 3-bit blocks m = (1, 1, 0) ∈ F^3.
• Encode each block as u → uG, where
G = ( 1 0 0 1 1 0 )
    ( 0 1 0 1 0 1 )
    ( 0 0 1 0 1 1 )
For example
(1, 1, 0) G = (1, 1, 0, 0, 1, 1);
(1, 0, 1) G = (1, 0, 1, 1, 0, 1);
etc...
16.
• Rate 3/6 = 1/2, better than before (1/3).
• Only 2^3 = 8 codewords in F^6:
C = {(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1), (1, 1, 0, 0, 1, 1),
(1, 0, 1, 1, 0, 1), (0, 1, 1, 1, 1, 0), (1, 1, 1, 0, 0, 0), (0, 0, 0, 0, 0, 0)}
• In F^6 we have 2^6 = 64 possible vectors.
• Any two codewords differ in at least 3 coordinates. I can detect up to 2 errors and correct 1 error!!!
18. Hamming distance
The intuitive concept of "closeness" of two words is formalized through the Hamming distance h(x, y): for two words x, y,
h(x, y) = the number of positions in which x and y differ.
A code C is a subset of F^n, F a finite field. An important parameter of C is its minimum distance
dist(C) = min{ h(x, y) | x, y ∈ C, x ≠ y }.
Theorem (Basic error correcting theorem)
1. A code C can detect up to s errors if dist(C) ≥ s + 1.
2. A code C can correct up to t errors if dist(C) ≥ 2t + 1.
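These definitions are easy to check by brute force for the (6, 3) code above (a small sketch; function names are our own):

```python
from itertools import product

# Generator matrix of the (6, 3) binary code from the earlier slides.
G = [(1, 0, 0, 1, 1, 0),
     (0, 1, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1)]

def encode(u):
    """Multiply the row vector u by G over F_2."""
    return tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2
                 for col in zip(*G))

def hamming(x, y):
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

codewords = [encode(u) for u in product((0, 1), repeat=3)]
dist = min(hamming(x, y) for x in codewords for y in codewords if x != y)
print(len(codewords), dist)  # 8 codewords, minimum distance 3
```

With dist(C) = 3 the theorem gives s = 2 detectable and t = 1 correctable errors, matching the slide.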
20. Definition
An (n, k) block code C is a k-dimensional subspace of F^n; the rows of a generator matrix G form a basis of C:
C = Im_F G = { uG : u ∈ F^k }.   (1)
Main coding theory problem
1. Construct codes that can correct a maximal number of errors while using a minimal amount of redundancy (rate).
2. Construct such codes with efficient encoding and decoding procedures.
21.
• Coding theory develops methods to protect information against errors.
• Cryptography develops methods to protect information against an adversary (or an unauthorized user).
• Coding theory - the theory of error correcting codes - is one of the most interesting and applied parts of mathematics and informatics.
• All real systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require error correcting codes because all real channels are, to some extent, noisy.
• Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra.
24. Topics we are investigating
• (Convolutional) codes over finite rings (Z_{p^r}).
• Application of convolutional codes to Distributed Storage Systems.
• Application of convolutional codes to Network Coding, in particular to Video Streaming.
27. Distributed storage systems
• Fast-growing demand for large-scale data storage.
• Failures occur: redundancy is needed to ensure resilience → coding theory.
• Data is stored over a network of nodes: peer-to-peer or data centers → distributed storage systems.
28. Coding Theory
• A file u = (u1, . . . , uk) ∈ F_q^k is redundantly stored across n nodes:
v = (v1, . . . , vn) = uG,
where G is the generator matrix of an (n, k)-code C.
30.
• What happens when nodes fail? The repair problem.
• Metrics:
  • Repair bandwidth
  • Storage cost
  • Locality
33. Locality
• The locality is the number of nodes necessary to repair one node that fails.
Definition
An (n, k) code has locality r if every symbol in a codeword is a linear combination of at most r other symbols in the codeword.
• Locality ≥ 2 (if the code is not a replication).
• There is a natural trade-off between distance and locality.
Theorem (Gopalan et al., 2012)
Let C be an (n, k) linear code with minimum distance d and locality r. Then
n − k + 1 − d ≥ ⌈k/r⌉ − 1.
35. Pyramid Codes are optimal with respect to this bound
• Pyramid codes were implemented in Facebook and Windows Azure Storage (released in 2007).
• But if two erasures occur... how do we repair multiple erasures?
• ...Next time... Today we focus on Video Streaming.
37. Video Streaming
• Explosive growth of multimedia traffic in general and of video in particular.
• Video already accounts for over 50% of the internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years [1].
[1] Cisco: Forecast and Methodology 2012-2017.
38. Video Streaming
• Strong demand for implementing highly efficient approaches for video transmission.
40. Network Coding
What is the best way to disseminate information over a network?
Linear random network coding
It has been proven that linear coding is enough to achieve the upper bound in multicast problems with one or more sources. It optimizes the throughput.
43. Linear Network Coding
• During one shot the transmitter injects a number of packets into the network, each of which may be regarded as a row vector over a finite field F_{q^m}.
• These packets propagate through the network. Each node creates a random linear combination of the packets it has available and transmits this random combination.
• Finally, the receiver collects such randomly generated packets and tries to infer the set of packets injected into the network.
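The three steps above can be sketched over F_2 (a toy illustration, not from the talk; over a field this small the random combinations may fail to span, which is one reason practical schemes use larger fields):

```python
import random

def rand_combine(packets, p=2):
    """One network node: a random linear combination of its packets over F_p."""
    coeffs = [random.randrange(p) for _ in packets]
    combo = [sum(c * x for c, x in zip(coeffs, col)) % p
             for col in zip(*packets)]
    return coeffs, combo

random.seed(1)
# The transmitter injects 2 packets of 4 symbols each.
injected = [[1, 0, 1, 1], [0, 1, 1, 0]]
# The receiver collects random combinations, each tagged with its coefficients.
collected = [rand_combine(injected) for _ in range(4)]
# If the collected coefficient vectors span F_2^2, the receiver can recover the
# injected packets by Gaussian elimination (omitted here).
print([coeffs for coeffs, _ in collected])
```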
46. Rank metric codes are used in Network Coding
• Rank metric codes are matrix codes C ⊆ F_q^{m×n}, equipped with the rank distance
d_rank(X, Y) = rank(X − Y), where X, Y ∈ F_q^{m×n}.
• For linear (n, k) rank metric codes over F_{q^m} with m ≥ n the following analog of the Singleton bound holds:
d_rank(C) ≤ n − k + 1.
• A code that achieves this bound is called Maximum Rank Distance (MRD). Gabidulin codes are a well-known class of MRD codes.
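The rank distance of two binary matrices can be computed with a short Gaussian elimination over F_2 (a sketch; `rank_gf2` and `rank_distance` are our own helpers, not library routines):

```python
def rank_gf2(M):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_distance(X, Y):
    """d_rank(X, Y) = rank(X - Y); over F_2, subtraction is XOR."""
    return rank_gf2([[(a + b) % 2 for a, b in zip(rx, ry)]
                     for rx, ry in zip(X, Y)])

X = [[1, 0], [0, 1]]
Y = [[1, 1], [0, 1]]
print(rank_distance(X, Y))  # 1: the matrices differ by a rank-1 matrix
```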
51. THE IDEA: Multi-shot
• Coding can also be performed over multiple uses of the network, whose internal structure may change at each shot.
• Creating dependencies among the transmitted codewords of different shots can improve the error-correction capabilities.
• Ideal coding techniques for video streaming must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure.
• Although the use of convolutional codes is widespread, their application to video streaming (or using the rank metric) is yet unexplored.
• We propose a novel scheme that adds complex dependencies to data streams in a quite simple way.
53. Block codes vs convolutional codes
. . . , u2, u1, u0   --G-->   . . . , v2 = u2 G, v1 = u1 G, v0 = u0 G
represented in a polynomial fashion:
· · · + u2 D^2 + u1 D + u0   --G-->   · · · + (u2 G) D^2 + (u1 G) D + u0 G
What if we substitute G by G(D) = G0 + G1 D + · · · + Gs D^s?
· · · + u2 D^2 + u1 D + u0   --G(D)-->   · · · + (u2 G0 + u1 G1 + u0 G2) D^2 + (u1 G0 + u0 G1) D + u0 G0
Block codes: C = {u G} = Im_F G ≅ {u(D) G} = Im_{F((D))} G
Convolutional codes: C = {u(D) G(D)} = Im_{F((D))} G(D)
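The substitution above is ordinary polynomial multiplication with matrix coefficients; for the rate-1/2 binary encoder G(D) = (1 + D + D^2, 1 + D^2) used later in the talk it reduces to convolving the message with two generator sequences (a minimal sketch with our own function name):

```python
def conv_encode(u, generators=((1, 1, 1), (1, 0, 1))):
    """Encode message bits u with a rate-1/n binary convolutional code.

    generators holds the coefficients of each generator polynomial,
    here G(D) = (1 + D + D^2, 1 + D^2), i.e. (1,1,1) and (1,0,1).
    """
    out = []
    for i in range(len(u) + 2):            # 2 = memory of the encoder
        for g in generators:
            out.append(sum(g[j] * u[i - j]
                           for j in range(3)
                           if 0 <= i - j < len(u)) % 2)
    return out

# A single 1 produces the generator coefficients, interleaved:
print(conv_encode([1]))  # [1, 1, 1, 0, 1, 1]
```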
56. Definition
A convolutional code C is an F((D))-subspace of F((D))^n.
A matrix G(D) whose rows form a basis for C is called an encoder. If C has rank k, then we say that C has rate k/n.
C = Im_{F((D))} G(D) = { u(D) G(D) : u(D) ∈ F((D))^k }
  = Ker_{F[D]} H(D) = { v(D) ∈ F[D]^n : v(D) H(D) = 0 },
where H(D) is called the parity-check matrix of C.
Remark
One can also consider the ring of polynomials F[D] instead of the Laurent series F((D)) and define C as an F[D]-submodule of F[D]^n.
57. A convolutional encoder is also a linear device which maps
u(0), u(1), · · ·  →  v(0), v(1), . . .
In this sense it is the same as a block encoder. The difference is that the convolutional encoder has an internal "storage vector" or "memory": v(i) does not depend only on u(i) but also on the storage vector x(i):
x(i + 1) = A x(i) + B u(i)
v(i) = C x(i) + E u(i)   (2)
with A ∈ F^{δ×δ}, B ∈ F^{δ×k}, C ∈ F^{n×δ}, E ∈ F^{n×k}.
58. The encoder
G(D) = ( D^2 + 1   D^2 + D + 1 )
has the following implementation:
[Figure, slides 58-61: a shift register with two delay elements; the successive slides clock the input bits . . . 0, 1, 1 through the register and read off the two output streams.]
62. Example
Let the convolutional code be given by the matrices
x(i + 1) = ( 0 0 ) x(i) + ( 1 ) u(i)
           ( 1 0 )        ( 0 )
v(i) = ( 1 1 ) x(i) + ( 1 ) u(i)
       ( 0 1 )        ( 1 )
that is, x1(i + 1) = u(i), x2(i + 1) = x1(i), v1(i) = x1(i) + x2(i) + u(i) and v2(i) = x2(i) + u(i).
From the transfer function E + C(D^{-1} I − A)^{-1} B we can compute an encoder
G(D) = ( 1 + D + D^2   1 + D^2 )
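The state-space recursion (2) can be simulated directly; below we use matrices consistent with G(D) = (1 + D + D^2, 1 + D^2) under the convention x(i+1) = A x(i) + B u(i) (a sketch with our own variable names):

```python
# State-space encoder over F_2: x(i+1) = A x(i) + B u(i), v(i) = C x(i) + E u(i).
A = [[0, 0],
     [1, 0]]          # x1(i+1) = u(i), x2(i+1) = x1(i)
B = [1, 0]
C = [[1, 1],
     [0, 1]]          # v1 = x1 + x2 + u, v2 = x2 + u
E = [1, 1]

def encode(u_bits):
    x, out = [0, 0], []
    for u in u_bits:
        v = [(sum(C[r][c] * x[c] for c in range(2)) + E[r] * u) % 2
             for r in range(2)]
        x = [(sum(A[r][c] * x[c] for c in range(2)) + B[r] * u) % 2
             for r in range(2)]
        out.append(v)
    return out

# An impulse reads off the generator polynomials:
# v1 = 1 + D + D^2 and v2 = 1 + D^2.
print(encode([1, 0, 0]))  # [[1, 1], [1, 0], [1, 1]]
```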
63. Example
A physical realization for the encoder G(D) = ( 1 + D + D^2   1 + D^2 ). This encoder has degree 2 and memory 2.
Clearly any matrix which is F(D)-equivalent to G(D) is also an encoder:
G'(D) = ( 1   (1 + D^2)/(1 + D + D^2) )
64. Example
A physical realization for the generator matrix G'(D). This encoder has degree 2 and infinite memory.
65. Example
A physical realization for the catastrophic encoder G''(D) = ( 1 + D^3   1 + D + D^2 + D^3 ). This encoder has degree 3 and memory 3. (Both entries share the common factor 1 + D, since 1 + D^3 = (1 + D)(1 + D + D^2) and 1 + D + D^2 + D^3 = (1 + D)(1 + D^2).)
68. Polynomial encoders
Two encoders G(D), G'(D) generate the same code if there exists an invertible matrix U(D) such that G(D) = U(D) G'(D).
Definition
A generator matrix G(D) is said to be non-catastrophic if for every v(D) = u(D) G(D),
supp(v(D)) is finite ⇒ supp(u(D)) is finite.
Theorem
G(D) is non-catastrophic if G(D) admits a polynomial right inverse.
69. Example
The encoder
G(D) = ( 1 + D   0   1       D )
       ( 1       D   1 + D   0 )
is non-catastrophic, as a right inverse is
H(D) = ( 0        1           )
       ( 0        0           )
       ( 0        0           )
       ( D^{-1}   1 + D^{-1}  )
70. Historical Remarks
• Convolutional codes were introduced by Elias (1955).
• The theory was imperfectly understood until a series of papers by Forney in the 1970s on the algebra of k × n matrices over the field of rational functions in the delay operator D.
• They became widespread in practice with Viterbi decoding, and are among the most widely implemented codes in (wireless) communications.
• The field is typically F_2, but in the last decade a renewed interest has grown in convolutional codes over large fields, trying to fully exploit the potential of convolutional codes.
75. In Applications
• In block coding, n and k are normally taken to be large.
• Convolutional codes are typically studied for n and k small and fixed (n = 2 and k = 1 is common) and for several values of δ.
• Decoding over the symmetric channel is, in general, difficult.
• The field is typically F_2. The degree cannot be too large, so that the Viterbi decoding algorithm stays efficient.
• Convolutional codes over large alphabets have attracted much attention in recent years.
• In [Tomas, Rosenthal, Smarandache 2012]: decoding over the erasure channel is easy and Viterbi is not needed, just linear algebra.
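The last point can be illustrated at the block level: on an erasure channel the receiver knows which symbols are missing, so it can simply solve the parity-check equations for them. A toy F_2 sketch with the (6, 3) code from the beginning of the talk (helper names are our own):

```python
from itertools import product

# Parity-check matrix of the (6, 3) code with G = [I | P] from earlier slides.
H = [(1, 1, 0, 1, 0, 0),
     (1, 0, 1, 0, 1, 0),
     (0, 1, 1, 0, 0, 1)]

def checks_ok(word):
    return all(sum(h * c for h, c in zip(row, word)) % 2 == 0 for row in H)

def fill_erasures(received):
    """received uses None for erased symbols; solve the check equations."""
    gaps = [i for i, c in enumerate(received) if c is None]
    solutions = []
    for guess in product((0, 1), repeat=len(gaps)):
        word = list(received)
        for i, g in zip(gaps, guess):
            word[i] = g
        if checks_ok(word):
            solutions.append(word)
    return solutions[0] if len(solutions) == 1 else None  # None: not recoverable

# Codeword (1,1,0)G = (1,1,0,0,1,1) with two symbols erased:
print(fill_erasures([1, None, 0, 0, None, 1]))  # [1, 1, 0, 0, 1, 1]
```

Since the code has minimum distance 3, any pattern of up to 2 erasures determines the codeword uniquely; no Viterbi search is involved.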
80. MDS convolutional codes over F
The Hamming weight of a polynomial vector
v(D) = Σ_{i∈N} v_i D^i = v_0 + v_1 D + v_2 D^2 + · · · + v_ν D^ν ∈ F[D]^n
is defined as wt(v(D)) = Σ_{i=0}^{ν} wt(v_i).
The free distance of a convolutional code C is given by
d_free(C) = min { wt(v(D)) | v(D) ∈ C and v(D) ≠ 0 }.
• We are interested in the maximum possible value of d_free(C).
• For block codes (δ = 0) we know that the maximum value is given by the Singleton bound: n − k + 1.
• This bound can be achieved if |F| > n.
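For small binary encoders the free distance can be estimated by encoding all short nonzero messages and taking the minimum codeword weight; for G(D) = (1 + D + D^2, 1 + D^2) this recovers the well-known value d_free = 5 (a brute-force sketch; the search length is a heuristic choice of ours):

```python
from itertools import product

def conv_encode(u, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 binary convolutional encoding with G(D) = (1+D+D^2, 1+D^2)."""
    m = len(gens[0]) - 1                       # encoder memory
    return [sum(g[j] * u[i - j] for j in range(len(g))
                if 0 <= i - j < len(u)) % 2
            for i in range(len(u) + m) for g in gens]

def free_distance_estimate(max_len=8):
    """Minimum codeword weight over all nonzero messages up to max_len bits."""
    return min(sum(conv_encode(list(u)))
               for L in range(1, max_len + 1)
               for u in product((0, 1), repeat=L) if any(u))

print(free_distance_estimate())  # 5
```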
82. Theorem
Rosenthal and Smarandache (1999) showed that the free distance of a convolutional code of rate k/n and degree δ is upper bounded by
d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1.   (3)
A code achieving (3) is called Maximum Distance Separable (MDS), and if it achieves it "as fast as possible" it is called strongly MDS.
• Allen conjectured (1999) the existence of convolutional codes that are both sMDS and MDP when k = 1 and n = 2.
• Rosenthal and Smarandache (2001) provided the first concrete construction of MDS convolutional codes.
• Gluesing-Luerssen et al. (2006) provided the first construction of strongly MDS codes when (n − k) | δ.
• Napp and Smarandache (2016) provided the first construction of strongly MDS codes for all rates and degrees.
83. Definition
Another important distance measure for a convolutional code is the j-th column distance d^c_j(C) (introduced by Costello), given by
d^c_j(C) = min { wt(v_{[0,j]}(D)) | v(D) ∈ C and v_0 ≠ 0 }
where v_{[0,j]}(D) = v_0 + v_1 D + . . . + v_j D^j represents the j-th truncation of the codeword v(D) ∈ C.
86. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
How do we construct MDP codes?
The construction of MDP convolutional codes boils down to the construction of superregular matrices.
87. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
LT-Superregular matrices
Definition [Gluesing-Luerssen, Rosenthal, Smarandache (2006)]
A lower triangular Toeplitz matrix

    B = ⎛ a_0                        ⎞
        ⎜ a_1   a_0                  ⎟
        ⎜  ⋮           ⋱             ⎟
        ⎝ a_j  a_{j−1}  · · ·  a_0   ⎠    (5)

is LT-superregular if every minor whose diagonal contains no (structurally) zero entry is nonzero.
Remark
Note that due to this lower triangular configuration the remaining minors are necessarily zero.
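Since LT-superregularity only involves the minors singled out in the definition, it can be checked mechanically for small sizes. A brute-force sketch (the example sequences and fields are illustrative):

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant of a square integer matrix over F_p (Gaussian elimination)."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] % p), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det = det * M[i][i] % p
        inv = pow(M[i][i] % p, p - 2, p)  # Fermat inverse, p prime
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            M[r] = [(M[r][c] - f * M[i][c]) % p for c in range(n)]
    return det % p

def is_lt_superregular(a, p):
    """a = [a_0, ..., a_j] defines the lower triangular Toeplitz matrix (5).
    A minor with rows r_1 < ... < r_s and columns c_1 < ... < c_s has a
    zero-free diagonal exactly when c_l <= r_l for every l; all such
    minors must be nonzero over F_p."""
    n = len(a)
    B = [[a[r - c] if r >= c else 0 for c in range(n)] for r in range(n)]
    for s in range(1, n + 1):
        for rows in combinations(range(n), s):
            for cols in combinations(range(n), s):
                if all(c <= r for r, c in zip(rows, cols)):
                    if det_mod([[B[r][c] for c in cols] for r in rows], p) == 0:
                        return False
    return True

print(is_lt_superregular([1, 1, 1], 2))  # False: F_2 is too small
print(is_lt_superregular([1, 1, 2], 5))  # True over F_5
```

The exponential number of minors is exactly why these matrices are hard to construct and to verify in large sizes.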
89. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Remarks
• Constructing classes of LT-superregular matrices is very difficult due to their triangular configuration.
• Only two classes are known:
1. Rosenthal et al. (2006) presented the first construction: for any n there exists a prime p such that the matrix of binomial coefficients

    ⎛ C(n,0)                                  ⎞
    ⎜ C(n−1,1)    C(n,0)                      ⎟
    ⎜   ⋮                  ⋱                  ⎟
    ⎝ C(n−1,n−1)  · · ·  C(n−1,1)  C(n,0)     ⎠  ∈ F_p^{n×n}

is LT-superregular. Bad news: this requires a field of very large characteristic.
90. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Remarks
2. Almeida, Napp and Pinto (2013) gave the first construction over any characteristic: let α be a primitive element of a finite field F of characteristic p. If |F| ≥ p^{2^M}, then the matrix

    ⎡ α^{2^0}                                 ⎤
    ⎢ α^{2^1}      α^{2^0}                    ⎥
    ⎢   ⋮                    ⋱                ⎥
    ⎣ α^{2^{M−1}}  · · ·  · · ·  α^{2^0}      ⎦

is LT-superregular. Bad news: |F| must be very large.
91. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Performance over the erasure channel
Theorem
Let C be an (n, k, δ) convolutional code and dc_{j0} its j0-th column distance. If in any sliding window of length (j0 + 1)n at most dc_{j0} − 1 erasures occur, then the transmitted sequence can be completely recovered.
92. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Example
[Figure: a received stream with erasure bursts of lengths 60, 80 and 60 in between correctly received symbols v.]
A [202, 101] MDS block code can correct 101 erasures in a window of 202 symbols (recovery rate 101/202): it cannot correct this window.
A (2, 1, 50) MDP convolutional code also has a 50% erasure-correcting capability, with (L + 1)n = 101 × 2 = 202. Take a window of 120 symbols, correct it, and continue until the whole sequence is recovered.
We have flexibility in choosing the size and position of the sliding window.
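The theorem's window condition can be checked mechanically. A minimal sketch, assuming an MDP code whose column distances meet bound (4) with equality (the function name and parameters are illustrative):

```python
def window_condition_holds(erased, length, n, k, j0):
    """Check the hypothesis of the recovery theorem for an MDP code:
    every sliding window of (j0+1)*n symbols must contain at most
    d^c_{j0} - 1 erasures, where d^c_{j0} = (n-k)*(j0+1) + 1 by (4)."""
    d = (n - k) * (j0 + 1) + 1
    w = (j0 + 1) * n
    erased = set(erased)
    return all(sum(1 for e in erased if s <= e < s + w) <= d - 1
               for s in range(length - w + 1))

# A (2, 1, 50) MDP code: window (L+1)*n = 101*2 = 202 symbols, tolerating
# up to d^c_L - 1 = 101 erasures (50%) per window.
print(window_condition_holds(range(10, 111), 400, 2, 1, 100))  # True: 101 erasures
print(window_condition_holds(range(10, 112), 400, 2, 1, 100))  # False: 102 erasures
```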
93. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Fundamental Open Problems
• Come up with LT-superregular matrices over small fields.
• What is the minimum field size needed to construct LT-superregular matrices?
• Typically convolutional codes are decoded via the Viterbi algorithm, whose complexity grows exponentially with the McMillan degree. New classes of codes that come with more efficient decoding algorithms are needed.
• Good constructions of convolutional codes for the rank metric.
• Good constructions tailor-made to deal with bursts of erasures (in both the Hamming and the rank metric).
94. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
References
G.D. Forney, Jr. (1975), “Minimal bases of rational vector spaces, with applications to multivariable linear systems”, SIAM J. Control 13, pp. 493–520.
G.D. Forney, Jr. (1970), “Convolutional codes I: algebraic structure”, IEEE Trans. Inform. Theory, vol. IT-16, pp. 720–738.
R.J. McEliece (1998), The Algebraic Theory of Convolutional Codes, in Handbook of Coding Theory, Vol. 1, North-Holland, Amsterdam.
R. Johannesson and K. Zigangirov (1998), Fundamentals of Convolutional Coding, IEEE Press.
95. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Summary of the basics
• Convolutional codes are block codes with memory; they generalize linear block codes in a natural way.
• Decoding a large number of errors per time interval requires a large free distance and a good distance profile.
• Constructing good convolutional codes requires constructing superregular matrices over F: difficult.
• Convolutional codes treat the information as a stream: great potential for video streaming.
• A lot of mathematics is needed: linear algebra, systems theory, finite fields, rings/module theory, etc.
96. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Many thanks for the invitation!