This document discusses error correction codes used in computer memory. It begins by describing the two main types of computer memory: read-only memory (ROM) and random access memory (RAM). It then discusses error correction codes, which are used to protect memory from soft errors. Majority logic decoding is introduced as a simple decoding method for some error correction codes. However, majority logic decoding can be slow for large memory sizes. The document goes on to propose an accelerated majority logic decoding method for difference-set low density parity check codes that relies on the first few iterations of decoding to detect errors, improving decoding speed. It also discusses extending this approach to Euclidean geometry low density parity check codes.
Efficient Majority Logic Fault Detection with Difference-Set Codes for Memory Applications
Abstract: Computer memory is the electronic space provided by silicon chips (semiconductor memory) or magnetic/optical media, used as temporary or permanent storage for the data and instructions that control a computer or execute its programs. It has two main types: (1) read-only memory (ROM), the smaller part of a computer's solid-state memory, fixed in size, which permanently stores the manufacturer's instructions for starting the computer when it is switched on; and (2) random access memory (RAM), the larger part, which is employed in running programs, while secondary storage (hard disks, CDs, DVDs, floppies, etc.) is used for archiving data. Memory chips provide access to stored data or instructions that is hundreds of times faster than that provided by secondary storage.
Index Terms: Error correction codes, Euclidean geometry low-density parity check (EG-LDPC) codes, majority logic decoding, memory.
I. INTRODUCTION
Error correction codes are commonly used to protect memories from so-called soft errors, which change the logical value of memory cells without damaging the circuit. As technology scales, memory devices become larger and more powerful error correction codes are needed. To this end, the use of more advanced codes has recently been proposed. These codes can correct a larger number of errors, but generally require complex decoders. To avoid high decoding complexity, the use of one-step majority logic decodable codes was first proposed for memory applications, and further work on this topic followed. One-step majority logic decoding can be implemented serially with very simple circuitry, but requires long decoding times; in a memory, this would increase the access time, which is an important system parameter. Only a few classes of codes can be decoded using one-step majority logic decoding. Among them are certain Euclidean geometry low-density parity check (EG-LDPC) codes and difference-set low-density parity check (DS-LDPC) codes.
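The orthogonality idea behind one-step majority logic decoding can be illustrated with a toy difference-set code. The lines of the Fano plane, generated from the perfect difference set {0, 1, 3} mod 7, give three parity checks "orthogonal" on each bit (any two lines through a point share only that point), so a majority vote over those checks corrects any single error. The following Python sketch uses this 7-bit code purely for illustration; it is not the paper's actual EG-LDPC or DS-LDPC parameter set:

```python
# One-step majority logic decoding (MLD) on a toy difference-set code.
# Lines of the Fano plane: L_i = {i, i+1, i+3} mod 7. The three lines
# through any point intersect pairwise only in that point, so their
# parity checks are orthogonal on it.
N = 7
LINES = [{i % N, (i + 1) % N, (i + 3) % N} for i in range(N)]

def majority_logic_decode(word):
    """Serially decide each bit by a majority vote of its orthogonal checks."""
    word = list(word)
    for bit in range(N):                      # one iteration per bit
        checks = [sum(word[j] for j in line) % 2
                  for line in LINES if bit in line]
        if sum(checks) > len(checks) // 2:    # majority of checks fail
            word[bit] ^= 1                    # flip the suspect bit
    return word

codeword = [0, 0, 1, 0, 1, 1, 1]   # complement of line {0,1,3}: a valid codeword
received = codeword.copy()
received[2] ^= 1                    # inject a single soft error
assert majority_logic_decode(received) == codeword
```

Because the checks on a bit share no other bit, a single error trips all three checks on the erroneous position but at most one check on any correct position, which is why the simple majority vote suffices.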
A method was recently proposed to accelerate a serial implementation of majority logic decoding of DS-LDPC codes. The idea behind the method is to use the first iterations of majority logic decoding to detect whether the word being decoded contains errors. If there are no errors, decoding can be stopped without completing the remaining iterations, greatly reducing the decoding time.

For a code with block length N, majority logic decoding (when implemented serially) requires N iterations, so the decoding time grows with the code size. In the proposed approach, only the first three iterations are used to detect errors, achieving a large speed increase when N is large. It was shown that for DS-LDPC codes, all error combinations of up to five errors are detected in the first three iterations, and errors affecting more than five bits are detected with a probability very close to one. The probability of undetected errors was also found to decrease as the code block length increases: out of a billion error patterns, only a few (and sometimes none) went undetected. This may be sufficient for some applications.
TABLE I
ONE STEP MLD EG-LDPC CODES
Another advantage of the proposed method is that it requires very little additional circuitry, since the decoding circuitry is also used for error detection. For example, it was shown that the additional area required to implement the scheme was only around 1% for large word sizes. The proposed method relies on the properties of DS-LDPC codes and is therefore not directly applicable to other code classes. In the following, a similar approach for EG-LDPC codes is presented.
The rest of this brief is organized as follows. Section II describes the existing system. Section III presents the proposed system (enhanced MLDD). Section IV presents the results and analysis, and finally Section V gives the conclusion and future work of this paper.
II. EXISTING SYSTEM
This section deals with the existing decoding methodologies used for error detection.
Efficient Majority Logic Fault Detection With
Difference-Set Codes for Memory Applications
N.Muralikrishna yadav1
, PG Student, Department of ECE, ASCET, Gudur, Andhra Pradesh, India.
Email: muralikrishnayadav.nethi@gmail.com
K. Dhanunjaya2
, Head of the Department, Department of ECE, ASCET, Gudur, Andhra Pradesh,
India. Email: hod.ece@audisankara.com
Proceedings of International Conference on Advances in Engineering and Technology
www.iaetsd.in
ISBN : 978 - 1505606395
International Association of Engineering and Technology for Skill Development
72
In error detection and correction, majority logic decoding is a method to decode repetition codes, based on the assumption that the symbol with the largest number of occurrences is the transmitted symbol. A majority logic decoder is based on a number of parity check equations which are orthogonal to each other; the majority result of these parity check equations decides the correctness of the current bit under decoding.
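For the repetition-code case, majority decoding reduces to a per-block popular vote: each source bit is sent n times, and the decoder outputs whichever symbol occurs most often among its n copies. A minimal Python sketch (the function name and layout are ours, purely illustrative):

```python
def majority_decode(received, n):
    """Decode an n-fold repetition code: each source bit was sent n times.

    The decoded bit is the symbol occurring most often among its n copies.
    """
    decoded = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        decoded.append(1 if sum(block) > n // 2 else 0)
    return decoded

# Example: bits [1, 0] sent with n = 3, one copy of each corrupted in transit.
print(majority_decode([1, 0, 1, 0, 1, 0], 3))  # -> [1, 0]
```

With n = 3 copies, a single flipped copy per block is always outvoted by the two correct ones.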
A. One Step Majority Logic Decoder
As described earlier, the majority logic decoder is a simple and effective decoder capable of correcting multiple bit flips, depending on the number of parity check sum equations. It consists of four parts: 1) a cyclic shift register; 2) an XOR matrix; 3) a majority gate; and 4) an XOR gate for error correction, as illustrated in Fig.1.
Fig.1. One-step Majority Logic Decoder for (15, 7) EG-LDPC Codes
In one-step majority logic decoding, the codeword is first loaded into the cyclic shift register. Then the check equations are computed, and the resulting sums are forwarded to the majority gate to evaluate the correctness of the current bit. If the number of 1's received is greater than the number of 0's, the current bit under decoding is assumed to be wrong, and a signal to correct it is triggered; otherwise, the bit under decoding is correct and no extra operation is needed. Next, the contents of the registers are rotated and the above procedure is repeated until all codeword bits have been processed. Finally, the parity check sums should all be zero if the codeword has been correctly decoded. In this process, each bit may be corrected only once. As a result, the decoding circuitry is simple, but it requires a long decoding time if the codeword is large. By one-step majority logic decoding, this code is capable of correcting any error pattern with two or fewer errors. For example, for a 15-bit codeword, decoding takes 15 cycles, which would be excessive for most applications.
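The serial one-step procedure above can be sketched in software. Since the parity check sets of the paper's (15, 7) EG-LDPC code are not reproduced here, the sketch below uses a smaller stand-in: the (7, 3) difference-set code built from the perfect difference set {0, 1, 3} mod 7, which gives J = 3 check sums orthogonal on each bit position and corrects t = ⌊J/2⌋ = 1 error. The function names are ours:

```python
N, J = 7, 3  # (7, 3) difference-set code from the perfect difference set {0, 1, 3} mod 7

def checks(p):
    """The J = 3 parity check sums orthogonal on bit position p:
    three 'lines' through p that pairwise intersect only at p."""
    return [[p, (p + 1) % N, (p + 3) % N],
            [p, (p + 4) % N, (p + 5) % N],
            [p, (p + 2) % N, (p + 6) % N]]

def one_step_mld(r):
    """Serial one-step majority logic decoding: visit each bit position once,
    flip it if a majority of its orthogonal check sums evaluate to 1."""
    r = list(r)
    for p in range(N):
        sums = [r[i] ^ r[j] ^ r[k] for i, j, k in checks(p)]  # XOR matrix
        if sum(sums) > J // 2:   # majority gate
            r[p] ^= 1            # XOR gate for error correction
    return r

codeword = [0, 0, 1, 0, 1, 1, 1]             # complement of the line {0, 1, 3}
received = list(codeword); received[4] ^= 1  # inject a single bit flip
print(one_step_mld(received) == codeword)    # -> True
```

Orthogonality is what makes this work: a single error at position e makes exactly one of the three sums at any other position p nonzero (never a majority), while at position e itself all three sums are 1, triggering the correction.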
B. Majority Logic Decoder/Detector (MLDD)
In order to overcome the drawback of the MLD method, the majority logic decoder/detector (MLDD) was proposed, in which the majority logic decoder itself acts as a fault detector. In general, the decoding algorithm is the same as in the majority logic decoder. The difference is that, instead of decoding all codeword bits, the MLDD method can stop after the third cycle: it is able to detect up to five bit flips in three decoding cycles, so the number of decoding cycles can be reduced for improved performance. The schematic of the majority logic decoder/detector is illustrated in Fig.2.
Fig.2. Schematic of Majority Logic Decoder/Detector
(MLDD)
Initially the codeword is stored in the cyclic shift register and shifted through all the taps. The intermediate values in each tap are given to the XOR matrix to compute the check sum equations, and the resulting sums are forwarded to the majority gate to evaluate the correctness of the current bit. If the number of 1's received is greater than the number of 0's, the current bit under decoding is wrong and the decoding process continues; otherwise, the bit under decoding is correct and no extra operation is needed. The contents of the registers are then rotated and the above procedure is repeated, stopping after the third cycle. If in the first three cycles of the decoding process the evaluation of the XOR matrix is "0" for all check sums, the codeword is determined to be error-free and forwarded directly to the output. If any of the three cycles produces at least one "1," the whole decoding process continues in order to eliminate the errors. Finally, the parity check sums should all be zero if the codeword has been correctly decoded. In conclusion, the MLDD method can detect five-bit errors and correct four-bit errors effectively. If the codeword contains more than five bit errors, the decoder produces an output but does not flag the errors present in the input; this type of error is called a silent data error. The drawbacks of this method are that it does not detect silent data errors and that the majority gate consumes significant area. The schematic of this memory system is shown in Fig.3. It is very similar to the one shown in Fig.1; additionally, a control unit is added to the MLDD module to manage the decoding process (i.e., to detect the error).
Fig.3. Schematic of memory system with MLDD
Overall operation of the MLDD is illustrated in Fig.4.
Fig.4. MLDD Algorithm
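The early-stopping control flow of the MLDD can be sketched on the same small (7, 3) difference-set code used earlier (the paper targets larger DS-LDPC and EG-LDPC codes; the code choice and all names here are our illustrative stand-ins): if the XOR-matrix outputs are all zero in each of the first three cycles, the word is declared error-free and decoding stops early.

```python
N, J = 7, 3  # toy (7, 3) difference-set code, perfect difference set {0, 1, 3} mod 7

def checks(p):
    """J = 3 parity check sums orthogonal on bit position p."""
    return [[p, (p + 1) % N, (p + 3) % N],
            [p, (p + 4) % N, (p + 5) % N],
            [p, (p + 2) % N, (p + 6) % N]]

def mldd(r):
    """Majority logic decoder/detector: if the first three cycles see only
    zero check sums, the word is taken as error-free and decoding stops;
    otherwise the full N-cycle majority logic decoding is completed."""
    r = list(r)
    error_seen = False
    for cycle, p in enumerate(range(N), start=1):
        sums = [r[i] ^ r[j] ^ r[k] for i, j, k in checks(p)]
        error_seen |= any(sums)
        if sum(sums) > J // 2:
            r[p] ^= 1
        if cycle == 3 and not error_seen:
            return r, cycle          # early exit: word declared error-free
    return r, N

codeword = [0, 0, 1, 0, 1, 1, 1]
print(mldd(codeword)[1])             # -> 3 (clean word: stops after 3 cycles)
bad = list(codeword); bad[4] ^= 1
print(mldd(bad))                     # -> full 7 cycles, error corrected
```

The speed gain comes from the common case: stored words are usually error-free, so most reads finish in 3 cycles instead of N.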
III. PROPOSED SYSTEM (ENHANCED MLDD)
This section presents an enhanced version of the ML decoder/detector that improves on the designs presented before by detecting silent data errors. The memory schematic of the enhanced MLDD is illustrated in Fig.5.
Fig.5. Memory schematic of an Enhanced MLDD
The data words are first encoded, and the resulting codeword is stored in the memory. When the memory is read, the codeword is fed through the enhanced MLDD before being sent to the output for further processing. The codeword contains message bits and parity (redundant) bits. The code efficiency is defined as the ratio of message bits to the number of transmitted bits per block; for the (15, 7) code, for example, it is 7/15 ≈ 0.47. The silent data error detection of the enhanced MLDD performs the decoding as in the MLDD, with some modifications: error patterns with more than five errors, which the MLDD misses, are detected and corrected by the enhanced MLDD method. The MLDD uses the control unit for detecting the error; if an error is found in an iteration, the modified algorithm illustrated in Fig.6 is applied. It is used to avoid silent data corruption at the MLDD output. This increases the error detection capability at the expense of the error correction capability. In this algorithm, up to four errors are handled as in the MLDD algorithm; more than four errors are detected after the third iteration, and correction is then completed after the nth iteration.
Fig.6. Enhanced MLDD algorithm
A. Sorting network
A sorting network is an abstract mathematical model of
a network of wires and comparator modules that is used to
sort a sequence of numbers. Each comparator connects two
wires and sorts the values by outputting the smaller value to
one wire, and the larger to the other. The main difference
between sorting networks and comparison sorting
algorithms is that with a sorting network the sequence of
comparisons is set in advance, regardless of the outcome of
previous comparisons. This independence of comparison
sequences is useful for parallel execution of the algorithms.
Proceedings of International Conference on Advances in Engineering and Technology
www.iaetsd.in
ISBN : 978 - 1505606395
International Association of Engineering and Technology for Skill Development
74
Fig.7 (a): Comparator circuit
A sorting network consists of wires and comparators that correctly sort all possible inputs into ascending order, so it can be used to reduce the number of gates and interconnections in the majority gate. Each wire carries a value, and each comparator takes two wires as input and output. When two values enter a comparator, it emits the higher value on the top wire and the lower value on the bottom wire. Using a sorting network, the number of gates in the majority gate is reduced. The inputs are first compared using the comparator circuit: the comparator consists of an OR gate, whose output (the maximum value) is placed on the top wire, and an AND gate, whose output (the minimum value) is placed on the bottom wire, as shown in Fig.7 (a).
Fig.7 (b): 2-bit sorter
Each vertical line represents one comparator, which compares two bits and assigns the larger one to the top output and the smaller one to the bottom: the two values are given to the OR gate to select the maximum and to the AND gate to select the minimum, as shown in Fig.7 (b).
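The comparator of Fig.7 (a) reduces to an (OR, AND) pair for one-bit values, and a majority gate over J check sums can then be read off the middle wire of a sorting network. Below is a sketch with a standard three-comparator network for J = 3 (the network and all names are our illustrative choice, not necessarily the paper's exact structure):

```python
def comparator(a, b):
    """One-bit comparator of Fig.7 (a): OR -> larger value (top wire),
    AND -> smaller value (bottom wire)."""
    return a | b, a & b

def majority3(bits):
    """Majority of three bits via a 3-input sorting network: after the
    comparators (0,1), (1,2), (0,1) the wires hold the bits in descending
    order, so the middle wire carries the majority value."""
    x = list(bits)
    for i, j in [(0, 1), (1, 2), (0, 1)]:
        x[i], x[j] = comparator(x[i], x[j])
    return x[1]

# Exhaustive check against a counting majority over all 8 input patterns.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority3([a, b, c]) == (1 if a + b + c >= 2 else 0)
print("ok")
```

Because the comparison sequence is fixed in advance, the same structure maps directly to combinational logic: three (OR, AND) comparator cells replace a counting-based majority circuit.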
IV.RESULTS AND ANALYSIS
4.1 Simulation Results:
The behavioral and post-route simulation waveforms for the fault secure encoder are shown in Fig.8 and Fig.9. In Fig.8, the input is the information vector and the output is the detector output d, which flags errors in the encoder. The information vector is first given to the encoder, which produces an encoded vector of n-bit length. This encoded vector is given as input to the detector. If any error is present in the encoded vector, the detector output is '1'; if it is '0', the encoded codeword is correct.
Fig.8.Behavioral simulation waveform for the fault
secure encoder
Fig.9.Post route simulation waveform for the fault
secure encoder
The behavioral and post-route simulation waveforms for the fault secure memory system are shown in Fig.10 and Fig.11. In Fig.10 the inputs are I (information vector), clk, wen (write enable), ren (read enable), and e (an error vector used to introduce an error). The encoded word is given to the memory: if 'wen' is '1' (high), the data is written into memory at a particular address; here the address line is the information vector. If 'ren' is high, the data is read and given as the memory output. The memory output is the combination of the coded vector and the error vector. This memory output is given as input to the corrector, which corrects the coded word. The corrected coded word is then given to the detector to check whether the coded word is correct or not. At the corrector side, the detector signal is 'md'.
Fig.10.Behavioral simulation waveform for the fault
secure memory system
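The write/inject/read/correct/detect flow of the simulated memory system can be mimicked in the same way. Again the (7, 3) difference-set code and all names (memory, e, correct, detect) are our illustrative stand-ins for the actual design:

```python
N, J = 7, 3  # toy (7, 3) difference-set code, perfect difference set {0, 1, 3} mod 7

def checks(p):
    """J = 3 parity check sums orthogonal on bit position p."""
    return [[p, (p + 1) % N, (p + 3) % N],
            [p, (p + 4) % N, (p + 5) % N],
            [p, (p + 2) % N, (p + 6) % N]]

def correct(word):
    """Majority logic corrector: one pass over all bit positions."""
    word = list(word)
    for p in range(N):
        sums = [word[i] ^ word[j] ^ word[k] for i, j, k in checks(p)]
        if sum(sums) > J // 2:
            word[p] ^= 1
    return word

def detect(word):
    """Detector signal md: 1 while any parity check sum is nonzero."""
    return int(any(word[i] ^ word[j] ^ word[k]
                   for p in range(N) for i, j, k in checks(p)))

memory = {}
codeword = [0, 0, 1, 0, 1, 1, 1]
memory[0] = codeword                           # wen = 1: write to address 0
e = [0, 0, 0, 0, 1, 0, 0]                      # error vector on read-out
read = [m ^ f for m, f in zip(memory[0], e)]   # ren = 1: read + injected error
fixed = correct(read)
print(fixed == codeword, detect(fixed))        # -> True 0
```

Running the detector after the corrector is what the 'md' signal captures: it goes low only once the corrector output is a valid codeword again.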
Fig.11. Post route simulation waveform for the fault secure memory system
Table II
Design Implementation summary for fault secure
memory system
Timing summary
Minimum period: 3.516ns (Maximum Frequency:
284.414MHz)
Minimum input arrival time before clock: 4.711ns
Maximum output required time after clock: 55.255ns
Maximum combinational path delay: 55.733ns
4.2 RTL Schematic
In integrated circuit design, a register transfer level (RTL) description is a way of describing the operation of a synchronous digital circuit. In RTL design, a circuit's behavior is defined in terms of the flow of signals (or transfer of data) between hardware registers, and the logical operations performed on those signals.
After the HDL synthesis phase of the synthesis process, the RTL Viewer can be used to view a schematic representation of the pre-optimized design in terms of generic symbols that are independent of the targeted Xilinx device, for example adders, multipliers, counters, AND gates, and OR gates. The RTL schematic for the fault secure encoder generated by the Xilinx synthesis tool is shown in Fig.12 below.
Fig.12.RTL Schematic for Fault secure encoder
The RTL schematic for the memory generated by the
Xilinx Synthesis tool is shown in Fig.13 below.
Fig.13.RTL Schematic for memory
The RTL schematic for the Fault secure memory
system generated by the Xilinx Synthesis tool is shown in
Fig.14 below.
Fig.14.RTL Schematic for Fault secure memory
system
4.3 Technology schematic:
The technology schematic for the Fault secure memory
system generated by the Xilinx Synthesis tool is shown in
Fig.15 below.
Fig.15. Technology schematic for fault secure encoder and decoder for memory
4.4 Floor plan of the fault secure encoder and decoder for memory:
The floor plan for the Fault secure memory system
generated by the Xilinx Synthesis tool is shown in Fig.16
below.
V.CONCLUSION AND FUTURE SCOPE
5.1. Conclusion:
In this project, FPGA implementations of a fault secure encoder and decoder for memory applications are presented. This architecture tolerates transient faults both in the storage unit and in the supporting logic (i.e., the encoder, decoder (corrector), and detector circuitries). The main advantage of the proposed architecture is that, using this detect-and-repeat technique, potential transient errors in the encoder or corrector output can be corrected, providing a fault-tolerant memory system with fault-tolerant supporting circuitry. It also takes less area compared to other ECC techniques, and in this architecture no separate decoder is needed because a systematic generator matrix is used.
5.2. Future work:
The fault secure encoder and decoder for memory applications protects the memory and supporting logic from soft errors. The proposed architecture tolerates transient faults both in the storage unit and in the supporting logic. As further work, instead of conventional memory, nano-memory could be used, which provides smaller, faster, and lower energy devices, allowing more powerful and compact circuitry.
VI. REFERENCES
[1] Pedro Reviriego, Juan A. Maestro, and Mark F.
Flanagan, “Error Detection in Majority Logic Decoding of
Euclidean Geometry Low Density Parity Check (EG-LDPC)
Codes”, IEEE Transactions on Very Large Scale Integration
(VLSI) Systems, Vol. 21, No. 1, January 2013.
[2] R. C. Baumann, “Radiation-induced soft errors in advanced semiconductor technologies,” IEEE Trans. Device Mater. Reliab., vol. 5, no. 3, pp. 301–316, Sep. 2005.
[3] M. A. Bajura, Y. Boulghassoul, R. Naseer, S. DasGupta, A. F. Witulski, J. Sondeen, S. D. Stansberry, J. Draper, L. W. Massengill, and J. N. Damoulakis, “Models and algorithmic limits for an ECC-based approach to hardening sub-100-nm SRAMs,” IEEE Trans. Nucl. Sci., vol. 54, no.
4, pp. 935–945, Aug. 2007.
[4] R. Naseer and J. Draper, “DEC ECC design to improve
memory reliability in sub-100 nm technologies,” Proc. IEEE
ICECS, pp. 586–589, 2008.
[5] S. Ghosh and P. D. Lincoln, “Dynamic low-density
parity check codes for fault-tolerant nano-scale memory,”
presented at the Foundations Nanosci. (FNANO), Snowbird,
Utah, 2007.
[6] S. Ghosh and P. D. Lincoln, “Low-density parity check
codes for error correction in nano-scale memory,” SRI
Computer Science Lab., Menlo Park, CA, Tech. Rep. CSL-
0703, 2007.
[7] H. Naeimi and A. DeHon, “Fault secure encoder and
decoder for memory applications,” in Proc. IEEE Int. Symp.
Defect Fault Toler. VLSI Syst., 2007, pp. 409–417.
[8] B. Vasic and S. K. Chilappagari, “An information
theoretical framework for analysis and design of nano-scale
fault-tolerant memories based on low-density parity-check
codes,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54,
no. 11, pp. 2438–2446, Nov. 2007.
[9] H. Naeimi and A. DeHon, “Fault secure encoder and
decoder for Nano-memory applications,” IEEE Trans. Very
Large Scale Integr. (VLSI) Syst., vol. 17, no. 4, pp. 473–
486, Apr. 2009.
[10] S. Lin and D. J. Costello, Error Control Coding, 2nd
ed. Englewood Cliffs, NJ: Prentice-Hall, 2004.
[11] S. Liu, P. Reviriego, and J. Maestro, “Efficient
majority logic fault detection with difference-set codes for
memory applications,” IEEE Trans. Very Large Scale
Integr. (VLSI) Syst., vol. 20, no. 1, pp. 148–156, Jan. 2012.
[12] H. Tang, J. Xu, S. Lin, and K. A. S. Abdel-Ghaffar,
“Codes on finite geometries,” IEEE Trans. Inf. Theory, vol.
51, no. 2, pp. 572–596, Feb. 2005.