Techniques for Removing Smallest Stopping Sets in LDPC Codes
Internship Report
on Techniques for Removing Smallest
Stopping Sets in LDPC Codes
Submitted to:
Vitaly Skachek
Senior Lecturer
Institute of Computer Science
University of Tartu, Tartu, Estonia
Submitted by:
Shankar Lal
Master's student
Dept. of Electrical Engineering
Aalto University, Espoo, Finland
Table of Contents
LDPC codes and their importance .................................. 3
Stopping sets .................................................... 4
Addition of redundant rows ....................................... 4
Previous work .................................................... 5
Our techniques ................................................... 5
Comparison & Results ............................................. 6
Future work ...................................................... 7
References ....................................................... 7
Appendix ......................................................... 7
LDPC codes and their importance:
Low-density parity-check (LDPC) codes are linear block codes used for forward error correction.
The name comes from the code's characteristic of having a small number of 1's compared to 0's in
the parity-check matrix. LDPC codes provide performance close to the capacity of most
communication channels.
LDPC codes were invented by Gallager in the 1960s during his PhD studies, but they received
little attention until about 15 years ago, because high-performance encoders and decoders were
not available [1].
LDPC codes can be represented in two ways: as linear block codes, via matrices of zeros and
ones, or as graphs (Tanner/bipartite graphs), which consist of check nodes and variable nodes
(or bit nodes). An edge joins a variable node to a check node if that bit appears in the
corresponding parity-check equation. The number of edges in the Tanner graph is equal to the
number of ones in the parity-check matrix [1]. An example of an LDPC code is given in
Figure 1.1 below (taken from [2]).
An LDPC code is called regular if the number of 1's in each column and in each row is constant;
otherwise it is called irregular.
Many different algorithms are available for constructing LDPC codes. Gallager introduced one;
MacKay also suggested an algorithm that randomly generates sparse parity-check matrices. I have
also used Unix/Linux-based software written by the Canadian professor Radford M. Neal [3],
which randomly generates LDPC codes given the matrix size and the number of 1's in each row
and column.
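The construction software mentioned above is not reproduced here, but a toy construction in the same spirit (a fixed number of 1's per column, placed in randomly chosen rows; the function name and parameters are my own, purely illustrative) can be sketched as:

```python
import random

def random_sparse_h(n_checks, n_vars, col_weight, seed=0):
    """Toy random sparse parity-check matrix: each column receives
    `col_weight` ones in randomly chosen rows. A sketch in the spirit of
    MacKay-style random constructions, not Neal's actual software."""
    rng = random.Random(seed)
    H = [[0] * n_vars for _ in range(n_checks)]
    for i in range(n_vars):
        for j in rng.sample(range(n_checks), col_weight):
            H[j][i] = 1
    return H

H = random_sparse_h(4, 8, 2)
# Every column has exactly the requested weight; row weights vary.
assert all(sum(row[i] for row in H) == 2 for i in range(8))
```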
LDPC codes are used in a variety of communication and data storage systems, because they enable
effective decoding over noisy channels and provide better error-correction performance.
Figure 1.1: Example of LDPC code [2].
Stopping sets:
A stopping set is commonly defined with the help of the Tanner graph. Two common definitions
are: "A stopping set S is a subset of V, the set of variable nodes, such that all neighbors of
S are connected to S at least twice" [4], or, in other words, "a stopping set is the set of
variable nodes whose induced graph contains no singly connected check nodes" [5].
Iterative decoding fails if the bits corresponding to a stopping set are erased. Stopping sets
of small size are the most harmful, because it is more likely that all of their bits are erased
at once; in that case the message-passing decoder used with LDPC codes cannot recover the
transmitted information in a unique way. It is therefore important to remove small stopping
sets.
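The second definition translates directly into code. The following sketch (plain Python, with a small matrix of my own choosing) tests whether a set of variable nodes is a stopping set by looking for singly connected check nodes:

```python
def is_stopping_set(H, S):
    """S (a set of variable-node indices) is a stopping set iff no check
    node is connected to S exactly once."""
    for row in H:
        hits = sum(row[i] for i in S)
        if hits == 1:          # a singly connected check node breaks S
            return False
    return True

# Toy parity-check matrix: 3 checks, 5 variable nodes (illustrative only).
H = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
]
print(is_stopping_set(H, {0, 1, 2}))  # True: every adjacent check meets S twice
print(is_stopping_set(H, {0, 3}))     # False: the second check meets {0, 3} once
```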
Addition of redundant rows:
There are some techniques by which these small stopping sets can be removed. One of them is to
take linear combinations of rows of the parity-check matrix H and append the resulting
redundant rows to the original matrix. In terms of the Tanner graph, redundant rows enhance
the code graph by increasing the number of check nodes: a redundant row eliminates a stopping
set S if the corresponding check node is connected to S exactly once. Since the redundant rows
are linear combinations of rows of the actual code, they do not change the code, and hence
there is no rate penalty [5].
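A minimal sketch of this idea, using a small made-up matrix: S is a stopping set of the original H, but appending the GF(2) sum (XOR) of two rows creates a check node that meets S exactly once, eliminating it without changing the code:

```python
def is_stopping_set(H, S):
    """True iff no check node of H is connected to S exactly once."""
    return all(sum(row[i] for i in S) != 1 for row in H)

def xor_rows(H, idxs):
    """GF(2) linear combination (bitwise XOR) of the selected rows of H."""
    out = [0] * len(H[0])
    for j in idxs:
        out = [a ^ b for a, b in zip(out, H[j])]
    return out

# Toy parity-check matrix; S = {0, 1, 2} is a stopping set of H.
H = [
    [1, 1, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 1],
]
S = {0, 1, 2}
assert is_stopping_set(H, S)            # S survives the original checks

redundant = xor_rows(H, [0, 1])         # [0, 0, 1, 1, 1]
H_ext = H + [redundant]                 # same code: the new row is redundant
assert not is_stopping_set(H_ext, S)    # new check meets S exactly once
```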
Other approaches look for low-weight individual redundant rows that eliminate a large number
of small stopping sets. One of them is a greedy heuristic: a minimum set of redundant rows
that removes a large number of stopping sets is found by repeated experiments, and these rows
are then combined with the parity-check matrix to eliminate all stopping sets of small size.
There are many other ways to generate redundant rows, and each method has its pros and cons;
low-weight redundant rows are usually considered good because of their ability to remove many
potential stopping sets.
The new rows can be either redundant or linearly independent. Adding redundant rows has the advantage of not modifying the code, but it may require many rows; conversely, adding linearly independent rows changes the code slightly, incurring a small loss in code rate, but yields better decoder performance with the addition of just a few rows.
Previous work:
Considerable related work has been done on this subject. One major effort is [6], whose authors showed that the performance of a linear code is affected by its smallest stopping sets and introduced new terminology such as stopping distance and stopping redundancy. They also suggested that adding redundant rows to the parity-check matrix can be very useful for removing the smallest stopping sets.
In [5], the authors introduced a scheme for finding low-weight redundant rows to add to the parity-check matrix in order to remove the smallest stopping sets. They first described the GARS (genie-aided random search) algorithm and its weakness of producing high-weight redundant rows, which increase the density of the PCM, and then presented their own method, LWRRS (low-weight redundant row search). The idea is to generate a list of codewords from the parity-check matrix H by an enumeration procedure, create a generator matrix G from those codewords, and then find the corresponding parity checks. These parity checks can be added to H to remove the stopping sets of small size.
In [7], the authors proposed a new scheme which, according to their results, provides better efficiency and improved performance for parity-check matrix extension algorithms. They suggested that adding linearly independent rows to the PCM changes the code slightly but forms a better code while adding fewer new rows. The basic idea is to identify the columns that appear in most of the stopping sets, set the entries of the new row to 1 at those column indexes, and set the remaining entries to zero. This technique eliminates many potential stopping sets, but it has the slight disadvantage that some stopping sets are reactivated by the new rows during the elimination process.
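The column-frequency construction from [7] can be sketched as follows; the stopping sets, the matrix width, and the choice of marking the top two columns are all illustrative assumptions, not values from [7]:

```python
from collections import Counter

# Hypothetical small stopping sets, given as sets of column indices
stopping_sets = [{0, 2, 4}, {1, 2, 4}, {2, 3, 5}]
n_cols = 6

# Count how often each column appears across the stopping sets
freq = Counter(c for s in stopping_sets for c in s)

# Put a 1 in the columns that occur in the most stopping sets, 0 elsewhere
# (taking the top two columns here is arbitrary)
most_common = {col for col, _ in freq.most_common(2)}
new_row = [1 if c in most_common else 0 for c in range(n_cols)]
print(new_row)  # [0, 0, 1, 0, 1, 0]
```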
Our techniques:
I used two techniques to remove the smallest stopping sets in the parity-check matrix. In the first technique, the basic idea is to take linear combinations of the rows of the original matrix to generate redundant rows and then append those rows to the PCM to eliminate the small-sized stopping sets. Each redundant row is generated by multiplying every row of the original matrix by 0 or 1 (chosen at random by the Matlab code) and summing the results modulo 2. I generated 200 redundant rows for my experiment; the Matlab code for generating them is given in the appendix. The Matlab code for finding the smallest stopping sets consists of nested for loops that check every combination of columns of the matrix. The LDPC code used in my experiment is a (10, 20) regular code with a 30x60 parity-check matrix, generated with the help of Professor Radford M. Neal's software [8]; this matrix is given in the appendix. The minimum stopping set in this code has size 5. Matlab code for the removal of stopping sets of size 5, 6 and 7 is given in the appendix, along with the code for generating redundant rows by linear combination. The stopping-set removal code follows a regular pattern and can be extended to find stopping sets of any size.
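The nested-loop pattern fixes the set size when the code is written; in a language with combination iterators the same search generalizes to any size k. A Python sketch (the 3x6 matrix is illustrative, not this project's 30x60 code):

```python
from itertools import combinations
import numpy as np

def stopping_sets_of_size(H, k):
    """Return all stopping sets of exactly size k: column subsets
    where no row restricted to the subset has weight exactly one."""
    found = []
    for cols in combinations(range(H.shape[1]), k):
        w = H[:, cols].sum(axis=1)
        if not np.any(w == 1):
            found.append(cols)
    return found

# Illustrative 3x6 parity-check matrix
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(stopping_sets_of_size(H, 3))
```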
I ran many iterations of the Matlab function to determine exactly how many redundant rows are required to remove the stopping sets of size 5, 6 and 7. The results are given in the section below.
The other technique used is a greedy heuristic. Here I evaluated each redundant row individually (taken from the matrix of 200 redundant rows generated in the previous method) by appending it to the original matrix and counting how many stopping sets it removes. The best rows, those that remove many of the smallest stopping sets, are then selected and appended to the PCM to eliminate all small stopping sets. This method requires fewer redundant rows and therefore leads to more efficient performance. The Matlab code for this greedy technique is given in the Appendix section.
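The greedy selection can be sketched language-agnostically: score each candidate row by how many of the surviving stopping sets it breaks (a row breaks a set when it meets it in exactly one column, the elimination condition above), keep the best, and repeat. A Python sketch with hypothetical candidate rows and stopping sets:

```python
def greedy_select(candidates, stopping_sets):
    """Repeatedly pick the candidate row that breaks the most surviving
    stopping sets; a row breaks a set if it meets it in exactly one column."""
    chosen, remaining = [], list(stopping_sets)
    while remaining:
        def broken(row):
            return [s for s in remaining if sum(row[c] for c in s) == 1]
        best = max(candidates, key=lambda r: len(broken(r)))
        hits = broken(best)
        if not hits:
            break  # no candidate breaks any surviving set
        chosen.append(best)
        remaining = [s for s in remaining if s not in hits]
    return chosen, remaining

# Hypothetical candidate redundant rows and small stopping sets
rows = [(1, 0, 0, 1, 0, 0), (0, 1, 0, 0, 0, 1), (0, 0, 1, 0, 1, 0)]
sets_ = [(0, 2, 4), (1, 2, 4), (2, 3, 5)]
chosen, left = greedy_select(rows, sets_)
print(chosen, left)  # two rows suffice to break all three sets
```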
Comparison & Results:
The first technique requires many more redundant rows than the second: removing the smallest stopping sets of size 5 needed 13 redundant rows, while sizes 6 and 7 required 18 and 76 redundant rows respectively. This is likely because the redundant rows are randomly generated and may themselves contain these stopping sets, so many rows were needed. In the greedy technique, by contrast, it took only 5 rows to remove the stopping sets of size 5, and 9 and 50 redundant rows to remove those of size 6 and 7 respectively. The selected redundant rows were already known to remove many stopping sets, so a small combination of them was enough to get rid of all small-sized stopping sets. The results for both techniques are shown in the graph below:
Figure 1.2 Comparison of the two stopping set removal techniques (x-axis: stopping set size 5, 6, 7; y-axis: number of redundant rows; Greedy Technique: 5, 9, 50; Linear Combination Technique: 13, 18, 76)
Matlab code for stopping set removal of size 5
n=size(Z,2); % number of columns of the matrix Z
for i=1:n-4
for j=1:(n-3)-i
for k=1:(n-2)-(i+j)
for l=1:(n-1)-(i+j+k)
for m=1:n-(i+j+k+l)
if (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l)+Z(:,i+j+k+l+m)>=2) ...
| (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l)+Z(:,i+j+k+l+m)==0)
W=['stopping set of size five is in the columns ' num2str(i) ' ' num2str(i+j) ...
' ' num2str(i+j+k) ' ' num2str(i+j+k+l) ' and ' num2str(i+j+k+l+m)];
disp(W)
end
end
end
end
end
end
Matlab code for stopping set removal of size 6
n=size(Z,2); % number of columns of the matrix Z
for i=1:n-5
for j=1:(n-4)-i
for k=1:(n-3)-(i+j)
for l=1:(n-2)-(i+j+k)
for m=1:(n-1)-(i+j+k+l)
for o=1:n-(i+j+k+l+m)
if (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l)+Z(:,i+j+k+l+m)+Z(:,i+j+k+l+m+o)>=2) ...
| (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l)+Z(:,i+j+k+l+m)+Z(:,i+j+k+l+m+o)==0)
W=['stopping set of size six is in the columns ' num2str(i) ' ' num2str(i+j) ...
' ' num2str(i+j+k) ' ' num2str(i+j+k+l) ' ' num2str(i+j+k+l+m) ' and ' ...
num2str(i+j+k+l+m+o)];
disp(W)
end
end
end
end
end
end
end
Matlab code for stopping set removal of size 7
n=size(Z,2); % number of columns of the matrix Z
for i=1:n-6
for j=1:(n-5)-i
for k=1:(n-4)-(i+j)
for l=1:(n-3)-(i+j+k)
for m=1:(n-2)-(i+j+k+l)
for o=1:(n-1)-(i+j+k+l+m)
for p=1:n-(i+j+k+l+m+o)
if (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l)+Z(:,i+j+k+l+m)+Z(:,i+j+k+l+m+o) ...
+Z(:,i+j+k+l+m+o+p)>=2) | (Z(:,i)+Z(:,i+j)+Z(:,i+j+k)+Z(:,i+j+k+l) ...
+Z(:,i+j+k+l+m)+Z(:,i+j+k+l+m+o)+Z(:,i+j+k+l+m+o+p)==0)
W=['stopping set of size seven is in the columns ' num2str(i) ' ' ...
num2str(i+j) ' ' num2str(i+j+k) ' ' num2str(i+j+k+l) ' ' num2str(i+j+k+l+m) ...
' ' num2str(i+j+k+l+m+o) ' and ' num2str(i+j+k+l+m+o+p)];
disp(W)
end
end
end
end
end
end
end
end
Code for generating 200 redundant rows by linear combination
s=zeros(1,size(H,2)); % running GF(2) sum, one entry per column of H
for i=1:200
for j=1:30
s=mod(s+randi([0 1])*H(j,:),2); % include row j with probability 1/2
end
Y(i,:)=s;
s=zeros(1,size(H,2));
end
Code for greedy heuristic technique
Z=[H;Y(80,:)]; % Append each individual row of Y to the original matrix in
               % turn and check how many stopping sets it removes.
               % Y is the matrix of previously generated redundant rows.
X=[Y(2,:); Y(5,:); Y(7,:); Y(11,:); Y(15,:); Y(17,:)];
               % After identifying the good rows of Y, combine them into a
               % matrix X.
Z=[H;X];       % X is then appended to the original parity-check matrix H to
               % form matrix Z, which removes all small stopping sets.