The document summarizes a proposed architecture that uses Butterfly Weight Accumulators (BWAs) to improve the latency and complexity of computing the Hamming distance when comparing data encoded with error correcting codes. Specifically, it recognizes that error correcting codes are often represented in a systematic form where the data and parity parts are separated. The proposed architecture leverages this by starting the data comparison before fully encoding the parity bits, reducing latency. It also uses BWAs as processing elements that can compute Hamming distance faster than previous saturating adder-based approaches. The architecture is analyzed theoretically and shown to reduce latency and complexity compared to prior direct compare methods.
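A direct-compare decoder's core primitive is the Hamming distance between the received word and a stored codeword; a minimal Python sketch of that reference computation (the BWA hardware and the systematic-form pipelining described above are not modeled here):

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two words differ."""
    return bin(a ^ b).count("1")

def within_distance(stored: int, incoming: int, t: int) -> bool:
    """Direct compare: accept a match if the distance is within the
    code's correction capability t."""
    return hamming_distance(stored, incoming) <= t

print(hamming_distance(0b1011, 0b0010))  # words differ in bits 0 and 3 -> 2
```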
An Efficient Interpolation-Based Chase BCH Decoder (ijsrd.com)
Error correction codes are used to correct errors that occur during data transmission over unreliable communication media. The idea behind these codes is to add redundancy bits to the transmitted data so that, even if some errors occur due to noise in the channel, the data can be correctly recovered at the destination. Bose–Ray-Chaudhuri–Hocquenghem (BCH) codes are one family of error-correcting codes. The BCH decoder consists of several blocks, among them the syndrome block, the Chien search block, and the error correction block. This paper describes a new method for error detection in the syndrome and Chien search blocks of a BCH decoder. The proposed syndrome block reduces the number of computations by deriving the even-numbered syndromes from the corresponding odd-numbered syndromes.
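The saving relies on a standard identity in characteristic 2: the received word r(x) has coefficients in GF(2), so r(α^(2i)) = r(α^i)^2, i.e. each even-numbered syndrome is the square of a lower-numbered one. A small self-contained Python sketch over GF(2^4); the field, reduction polynomial, and received word are illustrative choices, not taken from the paper:

```python
def gf16_mul(a: int, b: int) -> int:
    """Multiply in GF(2^4) with reduction polynomial x^4 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:            # degree reached 4: reduce
            a ^= 0b10011
        b >>= 1
    return p

def syndrome(r_coeffs, point):
    """Evaluate r(x) at a field element; r is over GF(2), low order first."""
    s, x = 0, 1
    for c in r_coeffs:
        if c:
            s ^= x
        x = gf16_mul(x, point)
    return s

alpha = 0b0010                    # alpha = x is primitive for this field
r = [1, 1, 0, 0, 1, 0, 1]         # arbitrary received word: 1 + x + x^4 + x^6
S1 = syndrome(r, alpha)                       # odd-numbered syndrome
S2 = syndrome(r, gf16_mul(alpha, alpha))      # even-numbered syndrome
print(S1, S2, gf16_mul(S1, S1))   # -> 12 15 15: S2 equals S1 squared
```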
An efficient model for design of 64-bit High Speed Parallel Prefix VLSI adder (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
Tail-biting convolutional codes (TBCC) have been extensively applied in communication systems. The method replaces the fixed tail with tail-biting data, avoiding the rate loss introduced by a fixed tail. Unfortunately, it makes the decoding computation more complex. Hence, several algorithms have been developed to overcome this issue, most of them iterative with an uncertain number of iterations. In this paper, we propose a VLSI architecture implementing our proposed reversed-trellis TBCC (RT-TBCC) algorithm. The algorithm modifies the direct-terminating maximum-likelihood (ML) decoding process to achieve a better correction rate, offering an alternative solution for tail-biting convolutional code decoding with fewer computations than the existing solution. The proposed architecture has been evaluated for the LTE standard and significantly reduces computation time and resources compared to the existing direct-terminating ML decoder. Functionality and bit error rate (BER) were evaluated through several simulations, a System-on-Chip (SoC) implementation, and FPGA synthesis.
Design and implementation of log domain decoder (IJECEIAES)
Low-Density Parity-Check (LDPC) codes have become popular for error correction in communication systems, thanks to their robust error-correcting performance and their ability to meet the requirements of 5G systems. However, the main challenge facing researchers is the hardware implementation, because of its high complexity and long run-time. In this paper, an efficient and optimized log domain decoder has been implemented using Xilinx System Generator with a Kintex-7 FPGA device (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a bit error rate (BER) very close to theoretical calculations, which indicates that this decoder is suitable for next-generation demands for high data rates with very low BER.
Finding the shortest path in a graph and its visualization using C# and WPF (IJECEIAES)
The shortest path problem is a classic problem in mathematics and computer science with applications in economics (sequential decision making, analysis of social networks, etc.). The presented work is an example of implementing and applying Dijkstra's algorithm to find the shortest path between two vertices in a connected, undirected graph, a problem often posed at the annual International Olympiad in Informatics. For this purpose, .NET 4.0, Visual Studio 2010, and WPF are used for the graphical user interface. The implemented program allows drawing an undirected graph, visualizing the shortest path between two vertices, and finding its value. This software is a valuable tool for the study of Dijkstra's algorithm and a great pedagogic instrument. All path-visualization figures included in this paper are actual screenshots of our visualization program.
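The algorithm the program visualizes can be sketched compactly. The paper's implementation is in C#/WPF; the following minimal Python version (priority-queue variant, with an illustrative graph, assuming the destination is reachable as in the paper's connected graphs) shows the same shortest-path computation:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path in an undirected weighted graph given as
    {node: [(neighbor, weight), ...]}; returns (cost, path)."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to src
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

g = {"A": [("B", 1), ("C", 4)], "B": [("A", 1), ("C", 2)],
     "C": [("A", 4), ("B", 2)]}
print(dijkstra(g, "A", "C"))             # -> (3, ['A', 'B', 'C'])
```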
An FPGA implementation of data hiding using LSB matching method (eSAT Journals)
Abstract
Data hiding is one of the familiar methodologies used nowadays to authenticate and resolve the copyrights of digital data. This paper discusses a Verilog-based FPGA implementation of data hiding in a grayscale image using the LSB matching technique. In this Verilog-based data hiding process, the secret digital signature is hidden in the host image and then analyzed in terms of PSNR value and payload capacity.
Keywords: Data hiding, Least-significant-bit matching method, Reversible, Grayscale image, Real-time LSB embedding, FPGA implementation
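LSB matching differs from plain LSB replacement: when a cover pixel's least significant bit disagrees with the message bit, the pixel is randomly incremented or decremented rather than having its bit overwritten. A small Python sketch of embedding, extraction, and the PSNR measure mentioned above; the pixel values and signature bits are made up for illustration, and the paper's actual design is Verilog hardware, not software:

```python
import math
import random

def lsb_match_embed(pixels, bits, seed=0):
    """LSB matching: where a pixel's LSB disagrees with the message bit,
    add or subtract 1 at random instead of overwriting the bit."""
    rng = random.Random(seed)
    out = list(pixels)
    for i, bit in enumerate(bits):
        if out[i] & 1 != bit:
            if out[i] == 0:                  # clamp at the 8-bit range edges
                out[i] += 1
            elif out[i] == 255:
                out[i] -= 1
            else:
                out[i] += rng.choice((-1, 1))
    return out

def extract(pixels, n):
    """The message is simply the LSB plane of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

def psnr(orig, stego):
    """Peak signal-to-noise ratio in dB for 8-bit data."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, stego)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

cover = [120, 121, 0, 255, 37, 200]          # hypothetical grayscale pixels
msg = [1, 0, 1, 1, 0, 0]                     # hypothetical signature bits
stego = lsb_match_embed(cover, msg)
assert extract(stego, len(msg)) == msg
print(round(psnr(cover, stego), 1))          # -> 49.9
```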
A 2-stage data word packet communication decoder using rate 1-by-3 Viterbi de... (eSAT Journals)
Abstract
In consumer electronics, high-speed communication applications based on hardware and software control play a vital role in meeting the operational requirements of electronic hardware for wired and wireless communication. Decoding and encoding data using the high-speed, low-power features of FPGA devices [1] based on VLSI technology offers small area, hardware portability, data security, high-speed network connectivity [2], error-removal capability, and the ability to realize complex algorithms. The Viterbi decoder is a high-rate decoder that is commonly and effectively used in modern communication hardware; it involves a trellis-coded modulation (TCM) scheme for decoding the data, and it aims to reduce power and cost and improve speed [1] compared to conventional decoders for wired and wireless communication. The work in this paper proposes a Viterbi decoder design with improved error-identification probability for communication systems with low-power operation. The proposed design combines the error-identification capability of the Viterbi decoder with a parity decoder to improve the probability that the overall system identifies errors during communication. Among the various functional blocks in the Viterbi decoder, both hardware complexity and decoding speed depend strongly on the decoder architecture. The operational blocks of the Viterbi decoder are combined with a parity-testing block that identifies errors in the Viterbi-decoded data using a parity bit. The present design proposes a multi-stage pipelined architecture: the first stage is the Viterbi decoding stage, and the second stage is the parity decoding stage, which identifies errors in the communicated data.
Any odd number of errors occurring in the data recovered from the first decoding stage can be identified by the second decoding stage. A general solution for communication using a conventional Viterbi decoder is also given in this paper. Implementation results of the proposed design for a rate-1/3 convolutional code are compared with the conventional design. The proposed algorithm is simulated and synthesized successfully with the Xilinx ISE tool [3] on a Xilinx Spartan 3E FPGA.
Keywords: ACS (Add-Compare-Select), Convolutional Code Rate, Error probability, FPGA, Low Power, Parity Encoder, Pipelining, Trace Back, Viterbi Decoder, Xilinx ISE.
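The parity stage's detection rule can be sketched independently of the Viterbi stage: a single parity bit over the decoded word flags any odd number of residual bit errors, while an even number of errors cancels and goes undetected, which matches the odd-error claim above. A minimal Python illustration with made-up words:

```python
def parity(bits):
    """Even-parity bit over a word: the XOR of all its bits."""
    p = 0
    for b in bits:
        p ^= b
    return p

def parity_check(decoded_bits, sent_parity):
    """True if an odd number of bit errors is detected in the word."""
    return parity(decoded_bits) != sent_parity

word = [1, 0, 1, 1, 0]
p = parity(word)
one_err = [1, 1, 1, 1, 0]      # one flipped bit  -> detected
two_err = [1, 1, 0, 1, 0]      # two flipped bits -> cancel, undetected
print(parity_check(one_err, p), parity_check(two_err, p))  # True False
```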
A new RSA public key encryption scheme with chaotic maps (IJECEIAES)
Public key cryptography has received great attention in the field of information exchange through insecure channels. In this paper, we combine Dependent-RSA (DRSA) and chaotic maps (CM) to obtain a new secure cryptosystem that depends on both integer factorization and the chaotic maps discrete logarithm (CMDL). With this new system, an attacker has to break both hard problems concurrently in order to recover the original text from a received ciphertext. Thus, this new system is expected to be more sophisticated and more secure than other systems. We prove that our new cryptosystem does not increase the overhead of the encryption or decryption process, as it requires a minimum number of operations in both. We show that this new cryptosystem is more efficient in terms of performance compared with other encryption systems, which makes it more suitable for nodes with limited computational ability.
REDUCED COMPLEXITY QUASI-CYCLIC LDPC ENCODER FOR IEEE 802.11N (VLSICS Design)
In this paper, we present a low-complexity quasi-cyclic low-density parity-check (QC-LDPC) encoder hardware design based on the Richardson–Urbanke lower-triangular algorithm, targeting the IEEE 802.11n wireless LAN standard with block length 648 and code rate 1/2. The LDPC encoder hardware implementation works at 301.433 MHz and can sustain a throughput of 12.12 Gbps. We apply the concept of multiplication by constant matrices in GF(2), which also optimizes the required hardware. The proposed QC-LDPC encoder architecture is suitable for high-speed applications. This hardwired architecture is less complex because it avoids the conventionally used block memories and cyclic shifters.
Radical Data Compression Algorithm Using Factorization (CSCJournals)
This work deals with an encoding algorithm that generates a “compressed” form of a message with fewer characters, which can be understood only by decoding the encoded data to reconstruct the original message. The proposed factorization techniques, in conjunction with a lossless method, were adopted for compression and decompression of data to make better use of memory, thereby decreasing communication costs. The proposed algorithms also shield the data from cryptanalysts during storage or transmission.
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio... (RSIS International)
In this paper, we have designed VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is fully realized when it is implemented in hardware, avoiding the extra processing time of the fetch-decode-execute cycle of traditional microprocessor-based computing systems. The new algorithm's lower time complexity, combined with its application-specific hardware implementation, makes it suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware using VHDL.
Design and Implementation of Different Architectures of MixColumn in FPGA (VLSICS Design)
This paper details the implementation of the AES encryption algorithm in the VHDL language on an FPGA using different MixColumn architectures. This research investigates the AES algorithm on FPGA using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Altera Quartus II software is used for simulation and optimization of the synthesizable VHDL code. The transformations of both encryption and decryption are simulated using an iterative design approach in order to optimize hardware consumption. Altera Cyclone III family devices are utilized for hardware evaluation.
A new text encryption algorithm based on a combination of a self-synchronizing stream cipher and a chaotic map is proposed in this paper. The new algorithm encrypts and decrypts text files of different sizes. First, the ASCII values of the plain text serve as input to a permutation operation, which diffuses the positions of these values using a hyper-chaotic map. Secondly, the resulting values are input to a substitution operation via a 1D Bernoulli map. Finally, the resultant values are XORed in feedback with the key. The proposed algorithm has been analyzed with a number of tests, and the results show that it has a large key space, a uniform histogram, and low correlation, and that it is very sensitive to any change in the plain text or key.
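The permute-substitute-XOR pipeline can be sketched as follows. This is a simplified illustration only: it stands in a single Bernoulli-type map for both the hyper-chaotic and 1D Bernoulli maps, collapses the substitution and XOR-feedback stages into one keystream XOR, and uses made-up key values, so it is not the paper's exact construction.

```python
def chaotic_sequence(x0, n, r=1.99999):
    """Iterate a Bernoulli-type map x -> r*x mod 1 (r just under 2)."""
    x, seq = x0, []
    for _ in range(n):
        x = (r * x) % 1.0
        seq.append(x)
    return seq

def encrypt(plain: str, key=(0.3711, 0.7133)) -> bytes:
    data = list(plain.encode("ascii"))
    n = len(data)
    # 1) permutation: reorder positions by argsorting a chaotic sequence
    order = sorted(range(n), key=chaotic_sequence(key[0], n).__getitem__)
    data = [data[i] for i in order]
    # 2) substitution + XOR, merged into one map-derived keystream XOR
    ks = [int(x * 256) & 0xFF for x in chaotic_sequence(key[1], n)]
    return bytes(b ^ k for b, k in zip(data, ks))

def decrypt(cipher: bytes, key=(0.3711, 0.7133)) -> str:
    n = len(cipher)
    ks = [int(x * 256) & 0xFF for x in chaotic_sequence(key[1], n)]
    data = [b ^ k for b, k in zip(cipher, ks)]
    # invert the permutation by scattering bytes back to their sources
    order = sorted(range(n), key=chaotic_sequence(key[0], n).__getitem__)
    out = [0] * n
    for dst, src in enumerate(order):
        out[src] = data[dst]
    return bytes(out).decode("ascii")

msg = "attack at dawn"
assert decrypt(encrypt(msg)) == msg      # round trip recovers the plaintext
```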
Performance Study of BCH Error Correcting Codes Using the Bit Error Rate Term... (IJERA Editor)
The quality of a digital transmission mainly depends on the amount of errors introduced into the transmission channel. BCH (Bose-Chaudhuri-Hocquenghem) codes are widely used in communication systems and storage systems. This paper presents a comparative performance study of the BCH (15, 7, 2) and BCH (255, 231, 3) codes in terms of the bit error rate (BER). The channel and the modulation type are respectively AWGN and PSK, with modulation order 2. First, we generated and simulated the error correcting codes BCH (15, 7, 2) and BCH (255, 231, 3) using the MATLAB simulator. Second, we compare the two codes in terms of BER; finally, we conclude the coding gain for a BER of 10^-4.
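BER figures in such a study are estimated by counting bit errors over a simulated channel. The sketch below reproduces only the uncoded baseline: a Monte Carlo BER estimate for BPSK (PSK of order 2) over AWGN, checked against the theoretical Q-function value; the BCH encoding and decoding steps the paper performs in MATLAB are omitted, and the Eb/N0 point is an arbitrary example.

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_awgn(ebn0_db, nbits=200_000, seed=1):
    """Monte Carlo BER of uncoded BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(nbits):
        tx = rng.choice((-1.0, 1.0))
        rx = tx + rng.gauss(0.0, sigma)
        if (rx > 0) != (tx > 0):        # hard decision at threshold 0
            errors += 1
    return errors / nbits

# Theory: Pb = Q(sqrt(2*Eb/N0)); compare at Eb/N0 = 4 dB
theory = q_func(math.sqrt(2 * 10 ** (4 / 10)))
print(round(theory, 4), round(ber_bpsk_awgn(4), 4))
```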
Nutritional Analysis of Edible Wild Fruit (Zizyphus Jujuba Mill.) Used By Rur... (IOSR Journals)
Wild edible plants form an important constituent of traditional diets in the Himalayas. The people of Hamirpur District live very close to nature, and wild fruits like Zizyphus jujuba Mill. are among the important natural resources of the district. The indigenous people of the district depend directly on wild plants for their sustenance. Hundreds of wild edible plants are present in the north-western Himalayas, among which the ‘Ber’ (Zizyphus jujuba Mill.) has religious as well as nutritional advantages over the others. Because of their easy accessibility, the fruits are very commonly eaten by the rural populace and by travellers. Biochemical analysis of the dried fruits showed remarkable levels of carbohydrates (69.12%), total sugars (27.75%), phosphorus (133 mg/100 g), calcium (199.19 mg/100 g), magnesium (84.69 mg/100 g), and iron (4.15 mg/100 g) on a dry weight basis; in other words, an important supplementary diet for giving strength to an otherwise poor and deprived populace. The present communication aims to highlight the fruits eaten by the countryside people and the nutritional components they obtain in return.
Cash Flow Valuation Model in Discrete Time (IOSR Journals)
This research considers the modelling of cash flow valuation in discrete time. It is shown that the value of a cash flow can be modeled in three equivalent ways under the same general assumptions. Consideration is also given to the value process at a stopping time and/or the cash flow process stopped at some stopping time.
The Effects of Rauwolfia Vomitoria Extract on the Liver Enzymes of Carbon Tet... (IOSR Journals)
Rauwolfia vomitoria is a natural medicinal plant which has been used over the years for the treatment of various ailments. The effects of Rauwolfia vomitoria extract on the liver enzymes of animals with carbon tetrachloride-induced hepatotoxicity were observed in adult Wistar rats weighing between 120 g and 190 g. They were divided into four groups A, B, C, and D of six rats each. Group A served as the control and received 0.41 ml of distilled water. The experimental groups B, C, and D received different doses as follows: group B received 0.50 ml of Rauwolfia vomitoria extract, group C received 0.5 ml of carbon tetrachloride, and group D received 0.41 ml of carbon tetrachloride plus 0.4 ml of Rauwolfia vomitoria extract. The drugs were administered once a day by intubation for a period of twenty-one days. Twenty-four hours after the last administration, the animals were anaesthetized under chloroform vapour and dissected. Liver tissues were removed and weighed. Blood samples were collected by cardiac puncture, and serum was separated from the clot by centrifugation using a bench-top centrifuge. Activities of serum aspartate aminotransferase, alanine aminotransferase, and alkaline phosphatase were determined using the Randox kit method. The relative liver weight for the carbon tetrachloride-treated group was significantly higher (p < 0.001) than the control. The extract exhibited a liver-protective effect against carbon tetrachloride-induced hepatotoxicity.
Video Encryption and Decryption with Authentication using Artificial Neural N... (IOSR Journals)
Abstract: Multimedia data security is becoming important with the continuous increase of digital communications on the internet. With the rapid development of various multimedia technologies, more and more multimedia data are generated and transmitted in the medical, commercial, and military fields, which may include sensitive information that should not be accessed by, or can only be partially exposed to, general users. The encryption algorithms developed to secure text data are not suitable for multimedia applications because of the large data size and real-time constraints. Therefore, there is a great demand for secure data storage and transmission techniques. Information security has traditionally been ensured with data encryption and authentication techniques, with the secrecy of communication maintained by secret key exchange; in effect, the strength of the algorithm depends solely on the length of the key. The presented work aims at secure video transmission using randomness in the encryption algorithm, thereby creating more confusion for anyone trying to obtain the original data. The security of the original cipher has been enhanced by the addition of impurities to misguide the cryptanalyst. Since the encryption process is a one-way function, artificial neural networks are well suited for this purpose, as they offer high security, no distortion, and the ability to handle non-linear input-output characteristics. In the presented work, the need for key exchange is also eliminated, which is otherwise a prerequisite for most of the algorithms used today. The proposed work finds application in medical imaging systems, military image database communication, confidential video conferencing, and similar applications. The results are obtained through the use of MATLAB 7.14.0.
Keywords: Artificial neural networks, Back propagation algorithm, video encryption and decryption, cipher and decipher
Academic Libraries: Engine Breed Spuring Innovation for Competitiveness and S...IOSR Journals
Abstract: The research was designed to unveil out how academic libraries have assisted institutions in bring up candidates to work in industrial capabilities towards the achievement of competitiveness and sustainable economic growth in society. This study was drawn from the extensive literature review and case studies to discuss the following economic value of accessing information using the library; innovation and services in academic libraries; initiatives, resources and activities that facilitate access to information in the library; FUTO Library and library responding to academic programmes. The following research questions helped ascertain what has made the academic libraries tick in this direction and to find answers to them; what status of people does the library have? To what extent material resources is provided and used by the library patrons? What ICT services are been offered by the library and of what purpose are they offered? What strategy development implied by the library to meet patron’s needs? What are the obstacles the library may encounter in the process to achieve competitiveness and sustainability to serve library patrons? Frequency and percentages were deployed for the study due is within the study environment area. Collection of Data was through the use of questionnaire developed by the researchers. Ninety-five copies of the questionnaires were distributed to staff of FUTO Library, out of which 85 were returned and found useful. Finding was analyzed and results ascertained. Keyword: Library, growth, process, ICT, resources, services, innovation
Abstract: An audio mixer amplifier is a device that translates a signal of one frequency band to another. It will accept many inputs at different frequencies and generate an output of the combination or sum of the frequencies. The mixer circuit provides good gain to weak audio signals. It can be used in front of an R.F. oscillator to make an R.F. receiver that is very sensitive to sound. Each input can be independently controlled by a variable resistor. There is also a provision for a balance control to fade out signal while simultaneously fading in the other. Key Words: Audio Mixer, Frequency, Signal, Circuit
Antimicrobial Screening of Vanga Vennai and Mathan Thailam for Diabetic Foot ...IOSR Journals
Siddha medicine is one of the ancient traditional science which is specialized in external medicines (Pura marunthukal). Diabetic patients develop foot ulcers as a major complication of Diabetes mellitus due to high susceptibility to infection which often leads to amputation. Diabetic foot infection is a combination of both aerobic and anaerobic organisms. Thus there is a need for effective and selective treatment strategies. This article focuses on antimicrobial screening of Vanga vennai (VV), Mathan Thailam (MT) and combination of both (VV+MT) which are the classical Siddha preparations. The methodology involved in the study is Agar well dilution method for anti microbial activity. Thus it paves a new way in treating infections of Diabetic foot ulcers. This preliminary study reveals these Siddha formulations as an effective therapeutic strategy for the management of Diabetic foot ulcer.
Synthesis and structural characterization of Al-CNT metal matrix composite us...IOSR Journals
In the present study Carbon nanotube(CNT) reinforced (Al) composite was synthesized by physical mixing method and CNT’s distribution within the matrix was traced and characterized. Ultrasonication was used to disperse the CNTs in Al nano powder followed by magnetic stirring. Samples of different weight percentage of CNT(0.5wt %, 1wt %, 1.5wt %, 2wt % of CNT) were obtained using this technique. Stuructural characteristics of the samples were explored using X-Ray diffraction(XRD),Scanning Electronic Microscope(SEM) and Transmission Electronic Microscope(TEM) technologies. Uniform distribution of CNTs within the matrix and the strength of metal/CNT interface were confirmed from SEM and TEM images. Along with the distribution of CNTs, XRD also validates the phase composition of the composite.
IOSR Journal of Electronics and Communication Engineering(IOSR-JECE) is an open access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
New Structure of Channel Coding: Serial Concatenation of Polar Codesijwmn
In this paper, we introduce a new coding and decoding structure for enhancing the reliability and
performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar
codes in series to create robust error-correcting codes. The primary objective here is to optimize the
behavior of individual elementary codes within polar codes. In this structure, we incorporate interleaving,
a technique that rearranges bits to maximize the separation between originally neighboring symbols. This
rearrangement is instrumental in converting error clusters into distributed errors across the entire
sequence. To evaluate their performance, we proposed to model a communication system with seven
components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel
decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of codes for different
block lengths and code rates. Next, we compare the bit error rate (BER) performance between our
proposed method and polar codes.
An efficient reconfigurable code rate cooperative low-density parity check co...IJECEIAES
In recent days, digital communication has become extensive. During digital communication, overheads such as signal attenuation and code-rate fluctuations can be minimized, and authentication properly maintained, by adopting parallel encoder and decoder operations. To address these drawbacks, a reconfigurable code-rate cooperative (RCRC) low-density parity-check (LDPC) method is proposed. The proposed RCRC-LDPC code can operate at gigabits per second and effectively performs linear encoding in dual-diagonal form, widens the range of code rates, and provides an optimal degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and can operate at a 0.98 code rate, the highest upper-bounded code rate compared with existing methods. The implementation was carried out using MATLAB; per the simulation results, the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second at a clock frequency of 160 MHz.
CODING SCHEMES FOR ENERGY CONSTRAINED IOT DEVICESijmnct
This paper investigates the application of advanced forward error correction techniques, mainly low-density parity-check (LDPC) codes and polar codes, for IoT networks. These codes are under consideration for 5G systems. Different code parameters, such as code rate and the number of decoding iterations, are used to show their effect on network performance. LDPC performed better than the polar code over the IoT network scenario considered in this work, for the same coding rate and number of decoding iterations. Considering bit error rate (BER) performance, LDPC with rate 1/3 provided an improvement of up to 2.6 dB for the additive white Gaussian noise (AWGN) channel and 2 dB for SUI-3 (a frequency-selective fading channel model). The LDPC code gives a throughput improvement of about 12% compared to the polar code with a coding rate of 2/3 over the AWGN channel; the corresponding value over the SUI-3 channel is about 10%. Finally, compared with LDPC, the polar code shows better energy saving for a large number of decoding iterations and high coding rates.
International Journal of Computational Engineering Research (IJCER) is an international online monthly journal published in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
An Efficient Low Complexity Low Latency Architecture for Matching of Data Enc...IJERA Editor
An efficient architecture for matching data encoded with an error correcting code using a cache memory is presented in brief. Using a cache memory reduces latency and complexity considerably, and this architecture further reduces dynamic power without affecting timing. For the comparison of data, the Hamming distance alone is used to check whether the data match the data kept in main memory. Unlike the butterfly-formed weight accumulator of previous work, no additional mechanism is presented here for calculating the Hamming distance.
Design and implementation of single bit error correction linear block code sy...TELKOMNIKA JOURNAL
Linear block codes (LBC) are error detection and correction codes widely used in communication systems. In this paper, a special type of LBC called the Hamming code was implemented and debugged on an FPGA kit using the ISE integrated software environment for simulation and for testing the results of the hardware system. The implemented system can correct single-bit errors and detect two-bit errors. The data segment length was chosen to give the system high reliability and to balance processing speed against the hardware's implementability. An adaptive input-data length has been considered; up to 248 bits of information can be handled using a Spartan 3E500 with a maximum slice utilization of 43%. The input/output data buses in the FPGA were customized to meet the requirements, with at most 34% of input/output resources used. The overall hardware design can be configured to give an optimal hardware size for the desired information rate.
Lightweight hamming product code based multiple bit error correction coding s...journalBEEI
In this paper, we present multiple bit error correction coding scheme based on extended Hamming product code combined with type II HARQ using shared resources for on chip interconnect. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three stages iterative decoding method for on chip interconnects. The proposed method of decoding achieves 20% and 28% reduction in area and power consumption respectively, with only small increase in decoder delay compared to the existing three stage iterative decoding scheme for multiple bit error correction. The proposed code also achieves excellent improvement in residual flit error rate and up to 58% of total power consumption compared to the other error control schemes. The low complexity and excellent residual flit error rate make the proposed code suitable for on chip interconnection links.
implementation of area efficient high speed eddr architectureKumar Goud
Abstract-This project presents an EDDR (error detection and data recovery) design, based on the residue-and-quotient (RQ) code, embedded into motion estimation (ME) for video-coding testing applications. An error in the processing elements (PEs), the key components of an ME, can be detected and recovered effectively using the EDDR design. The proposed EDDR design for ME testing detects errors and recovers data with acceptable area overhead and timing penalty. Functional verification and synthesis were done with Xilinx ISE; compared with the existing design, the implemented design's area and timing are reduced.
Index Terms—Area overhead, data recovery, error detection, reliability, residue-and-quotient (RQ) code, Xilinx ISE
Area efficient parallel LFSR for cyclic redundancy check IJECEIAES
Cyclic Redundancy Check (CRC), a code for error detection, finds many applications in digital communication, data storage, control systems, and data compression. CRC encoding is carried out using a Linear Feedback Shift Register (LFSR). A serial implementation of CRC requires a number of clock cycles equal to the message length plus the degree of the generator polynomial, whereas a parallel implementation requires only one clock cycle if the whole data message is applied at once. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique called state-space transformation. This paper presents a searching algorithm and a new technique to find the number of XOR gates required for different CRC algorithms, with a detailed explanation of the search algorithm's implementation. The comparison between the proposed and previous architectures shows that the number of XOR gates is reduced for the CRC algorithms, which improves hardware efficiency. The searching algorithm and all matrix computations were performed using MATLAB.
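The cycle-count claim above (message length plus generator degree for a serial LFSR) can be illustrated with a short behavioural model of bit-serial CRC division. This is a sketch we add for illustration, not the paper's architecture; `crc_serial` is an assumed name:

```python
def crc_serial(message_bits, poly_bits):
    """Bit-serial CRC as polynomial long division with an LFSR.
    poly_bits is the generator polynomial, MSB first (leading 1 included).
    Cycle count = message length + polynomial degree, as the abstract states."""
    degree = len(poly_bits) - 1
    reg = [0] * degree                        # register holding the running remainder
    cycles = 0
    for bit in message_bits + [0] * degree:   # append degree-many zero bits
        carry = reg[0]                        # bit shifted out of the MSB
        reg = reg[1:] + [bit]                 # shift left, message bit enters at LSB
        if carry:                             # subtract (XOR) the generator
            reg = [r ^ p for r, p in zip(reg, poly_bits[1:])]
        cycles += 1
    return reg, cycles

# Classic worked example: message 1101, generator x^3 + x + 1 (bits 1011)
remainder, cycles = crc_serial([1, 1, 0, 1], [1, 0, 1, 1])
print(remainder, cycles)   # [0, 0, 1] 7  (4 message bits + degree 3)
```

A parallel implementation would instead XOR the whole message block against precomputed matrix rows in a single cycle, which is where the gate-count optimization discussed in the abstract comes in.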
Fpga implementation of (15,7) bch encoder and decoder for text messageeSAT Journals
Abstract: In a communication channel, noise and interference are the two main sources of errors occurring during message transmission; thus, error control codes are used to achieve error-free communication. This paper discusses the FPGA implementation of a (15, 7) BCH encoder and decoder for text messages using the Verilog hardware description language. Each character in a text message is first converted into 7 bits of binary data, which are encoded into a 15-bit codeword by the (15, 7) BCH encoder. Any 2-bit error in any position of the 15-bit codeword is detected and corrected, and the corrected data is converted back into an ASCII character. The decoder is implemented using the Peterson algorithm and the Chien search algorithm. Simulation was carried out using the Xilinx 12.1 ISE simulator, and the results were verified for an arbitrarily chosen message. Synthesis was done with the RTL compiler, and power and area were estimated for 180 nm technology. Finally, both the encoder and decoder designs were implemented on a Spartan 3E FPGA. Index Terms: BCH Encoder, BCH Decoder, FPGA, Verilog, Cadence RTL compiler
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
E010422834
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE)
e-ISSN: 2278-2834, p-ISSN: 2278-8735. Volume 10, Issue 4, Ver. II (Jul-Aug 2015), PP 28-34
www.iosrjournals.org
DOI: 10.9790/2834-10422834
Synthesis of Data Encoded With Error Correcting Codes Using BWA Architecture

Steffi Philip Mulamoottil¹, Dr. E. Nagabhooshanam², Nimmagadda Ravali³

¹M.Tech VLSI System Design Student, Sridevi Women's Engineering College, Hyderabad
²HOD, ECE Department, Sridevi Women's Engineering College, Hyderabad
³Assistant Professor, Sridevi Women's Engineering College, Hyderabad
I. Introduction
Data comparison is widely used in computing systems to perform many operations, such as tag
matching in cache memories and virtual-to-physical address translation in TLBs. Because of this prevalence,
it is important to implement the comparison circuit with low hardware complexity. Data Fix has developed a
series of algorithms and programs that use token-based matching in conjunction with various fuzzy-matching
functions: for a given block of source data, each token is compared against records from the target dataset. The
data comparison usually resides in the critical path of components devised to increase system performance, for
example caches and TLBs, whose outputs determine the flow of succeeding operations in a pipeline. The circuit
must therefore be designed with as low a latency as possible, since latency describes the time interval between
stimulus and response. Without low latency, these components would be disqualified from serving as
accelerators, and the overall performance of the system would be severely degraded. Modern computers employ
error correcting codes (ECCs) to protect data and improve reliability. A complicated decoding procedure would
then precede the data comparison, which elongates the critical path and exacerbates the complexity overhead,
making the above design constraints even harder to meet.
Numerous algorithms have been developed to allow for so-called ‘fuzzy’ matching, including soundex, NYSIIS,
Jaro-Winkler, and many different types of character transposition functions. While these approaches are useful
for matching specific fields they are not capable of accurately comparing larger blocks of data.
The most recent solution to the matching problem is the direct compare method. This method encodes the incoming data and then compares it with the retrieved data, which has been encoded as well. It thereby removes the complicated decoding procedure, which is more complex and time-consuming than encoding, from the critical path. When performing the comparison, the method does not examine whether the retrieved data is exactly the same as the incoming data. Instead, it checks whether the retrieved data resides in the error-correctable range of the codeword corresponding to the incoming data. This check requires an additional circuit for computing the Hamming distance, i.e., the number of differing bits between two code words. For computing the Hamming distance, the saturating adder (SA) is used as a building block; an SA forces its output not to exceed the number of detectable errors by more than one. In this work, the ECC detects double-bit errors, corrects single-bit errors, and is represented in a systematic form in which the data and parity parts are completely separated.
In this brief, we revise the SA-based direct compare architecture to reduce latency and hardware complexity by resolving the above-mentioned drawbacks. More specifically, we consider error-correcting codes in systematic form while designing the proposed architecture, and we provide a low-complexity processing element that computes the Hamming distance faster. As a result, both latency and hardware complexity are reduced compared with the SA-based architecture.
The rest of the paper is organised as follows. Section II describes previous works. Sections III and IV describe the proposed architecture, Section V presents the evaluation, and Section VI concludes the brief.
II. Previous Works
This section describes the conventional decode and compare architecture and encode and compare
architecture based on the direct compare method. For the sake of concreteness, only tag matching in cache
memory is discussed in this brief, but the proposed architecture can be applied to similar applications without
loss of generality.
Fig 1. (a) Decode-and-compare architecture and (b) encode-and-compare architecture.
A. Decode-and-compare architecture
In the decode-and-compare architecture, the n-bit retrieved codeword is first decoded to extract the original k-bit tag. The extracted k-bit tag is then compared with the k-bit tag field of the incoming address to determine whether the tags match, as shown in Fig. 1(a). As the retrieved codeword must go through the decoder before being compared with the incoming tag, the critical path is too long for a practical cache system designed for high-speed access. In addition, since the decoder is one of the most complicated processing elements, the complexity overhead is not negligible.
B. Encode-and-Compare Architecture
Note that decoding is usually more complex and takes more time than encoding, as it encompasses a series of error-detection or syndrome-calculation steps followed by error correction; implementation results support this claim. To resolve the drawbacks of the decode-and-compare architecture, the decoding of a retrieved codeword is therefore replaced with the encoding of an incoming tag in the encode-and-compare architecture. More precisely, a k-bit incoming tag is first encoded to the corresponding n-bit codeword X and compared with an n-bit retrieved codeword Y, as shown in Fig. 1(b).
The comparison is to examine in how many bits the two code words differ, not to check whether they are exactly equal. For this, we compute the Hamming distance d between the two code words and classify the cases according to the range of d.
Let tmax and rmax denote the numbers of maximally correctable and detectable errors, respectively. The cases are
summarized as follows.
Condition            Result
1) d = 0             X matches Y exactly
2) 0 < d ≤ tmax      X matches Y provided that at most tmax errors in Y are corrected
3) tmax < d ≤ rmax   Y has detectable but uncorrectable errors
4) rmax < d          X does not match Y
Assuming that the incoming address has no errors, we can regard the two tags as matched if d is in
either the first or the second ranges. In this way, while maintaining the error-correcting capability, the
architecture can remove the decoder from its critical path at the cost of an encoder being newly introduced. Note
that the encoder is, in general, much simpler than the decoder, and thus the encoding cost is significantly less
than the decoding cost. Since the above method needs to compute the Hamming distance, a circuit dedicated to this computation was presented in prior work.
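The four ranges above can be captured in a short behavioural sketch (Python, for illustration only; the function and variable names are ours, not the paper's):

```python
# Illustrative model of the encode-and-compare classification.
# t_max: maximally correctable errors; r_max: maximally detectable errors.

def hamming_distance(x: int, y: int) -> int:
    """Number of differing bits between two equal-length code words."""
    return bin(x ^ y).count("1")

def classify(x: int, y: int, t_max: int, r_max: int) -> str:
    """Map the Hamming distance d between X and Y to one of the four ranges."""
    d = hamming_distance(x, y)
    if d == 0:
        return "exact match"                   # range 1
    if d <= t_max:
        return "match after correction"        # range 2
    if d <= r_max:
        return "detectable but uncorrectable"  # range 3
    return "mismatch"                          # range 4

# SEC-DED-style parameters: t_max = 1, r_max = 2
print(classify(0b10110100, 0b10110110, 1, 2))  # → match after correction
```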
Fig 2. SA-based architecture supporting the direct compare method
The circuit shown in Fig. 2 first performs XOR operations for every pair of bits in X and Y so as to
generate a vector representing the bitwise difference of the two code words. The following half adders (HAs)
are used to count the number of 1’s in two adjacent bits in the vector. The numbers of 1’s are accumulated by
passing through the following SA tree. In the SA tree, the accumulated value z is saturated to rmax + 1 if it exceeds rmax. More precisely, given inputs x and y, z can be expressed as

z = x + y if x + y ≤ rmax, and z = rmax + 1 otherwise. (1)

The final accumulated value indicates the range of d. As the compulsory saturation necessitates additional logic circuitry, the complexity of an SA is higher than that of a conventional adder.
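The saturating behaviour can be sketched in software as follows (a behavioural model under our own naming, not the circuit itself):

```python
# Model of a saturating adder (SA): the partial count z = x + y is clamped
# to r_max + 1, since any larger value still means "beyond the detectable
# range" and carries no further information about the range of d.

def sa(x: int, y: int, r_max: int) -> int:
    z = x + y
    return z if z <= r_max else r_max + 1

def sa_tree_count(diff_bits, r_max):
    """Accumulate the 1's of a bitwise-difference vector through an SA tree."""
    counts = list(diff_bits)
    while len(counts) > 1:
        counts = [
            sa(counts[i], counts[i + 1] if i + 1 < len(counts) else 0, r_max)
            for i in range(0, len(counts), 2)
        ]
    return counts[0]

print(sa_tree_count([1, 1, 1, 1], r_max=2))  # → 3 (saturated: d exceeds r_max)
```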
III. Butterfly Weight Accumulator (BWA) Architecture
This section describes a new architecture that can reduce the latency and complexity of the data
comparison by using the characteristics of systematic codes. In addition, a new processing element is presented
in this brief to reduce latency and complexity further.
A. Datapath Design for Systematic Codes
In the SA-based architecture, the comparison of two code words is invoked after the incoming tag is encoded. Therefore, the critical path consists of the encoding followed by the n-bit comparison, as shown in Fig. 3(a).
Fig. 3. Timing diagram of the tag match in (a) direct compare method and (b) proposed architecture.
However, the SA-based architecture does not consider the fact that, in practice, the ECC codeword is of a systematic form in which the data and parity parts are completely separated, as shown in Fig. 4.
Fig. 4. Systematic representation of an ECC codeword.
As the data part of a systematic codeword is exactly the same as the incoming tag field, it is immediately available for comparison, while the parity part becomes available only after the encoding is completed. Based on this fact, the comparison of the k-bit tags can be started before the remaining (n−k)-bit comparison of the parity bits. In the proposed architecture, therefore, the encoding process that generates the parity bits from the incoming tag is performed in parallel with the tag comparison, reducing the overall latency as shown in Fig. 3(b).
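A back-of-the-envelope model of the two timing diagrams in Fig. 3 (all delay values below are made-up, unit-less placeholders):

```python
# Fig. 3(a): the n-bit comparison starts only after encoding finishes.
def serial_latency(l_enc, l_cmp_n):
    return l_enc + l_cmp_n

# Fig. 3(b): the k-bit data comparison overlaps the parity encoding,
# so only the (n-k)-bit parity comparison waits for the encoder.
def parallel_latency(l_enc, l_cmp_k, l_cmp_parity):
    return max(l_cmp_k, l_enc + l_cmp_parity)

print(serial_latency(5, 4))       # → 9
print(parallel_latency(5, 4, 2))  # → 7
```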
B. Architecture for Computing the Hamming Distance
The proposed architecture, based on the datapath design above, is shown in Fig. 5. It contains multiple butterfly-formed weight accumulators (BWAs) proposed to improve the latency and complexity of the Hamming distance computation.
Fig.5 : Proposed architecture optimized for systematic code words
The basic function of the BWA is to count the number of 1’s among its input bits. It consists of
multiple stages of Half Adders as shown in Fig. 6(a), where each output bit of a HA is associated with a weight.
The HAs in a stage are connected in a butterfly form so as to accumulate the carry bits and the sum bits
of the upper stage separately. In other words, both inputs of a HA in a stage, except the first stage, are either
carry bits or sum bits computed in the upper stage. This connection method leads to a property that if an output
bit of a HA is set, the number of 1’s among the bits in the paths reaching the HA is equal to the weight of the
output bit. In Fig. 6(a), for example, if the carry bit of the gray-coloured HA is set, the number of 1’s among the
associated input bits, i.e., A, B, C, and D, is 2. At the last stage of Fig. 6(a), the number of 1's among the input bits, d, can be calculated as the weighted sum of the final output bits, i.e., d = Σ wi·bi, where bi is an output bit and wi is its associated weight.
Since what we need is not the precise Hamming distance but the range it belongs to, the circuit can be simplified. When rmax = 1, for example, two or more 1's among the input bits can be regarded as the same case, falling in the fourth range. In that case, several HAs can be replaced with a simple OR-gate tree as shown in Fig. 6(b). This is an advantage over the SA, which resorts to the compulsory saturation expressed in (1).
Fig. 6. Proposed BWA. (a) General structure and (b) new structure revised for the matching of ECC-protected
data. Note that sum-bit lines are dotted for visibility
Note that in Fig. 6 there is no overlap between any pair of carry-bit lines or any pair of sum-bit lines. As overlaps exist only between carry-bit lines and sum-bit lines, they are easy to resolve with contemporary technology, which provides multiple routing layers, no matter how many bits a BWA takes. We now explain the overall architecture in more detail. Each XOR stage in Fig. 5 generates the bitwise difference
vector for either data bits or parity bits, and the following processing elements count the number of 1’s in the
vector, i.e., the Hamming distance. Each BWA at the first level is in the revised form shown in Fig. 6(b), and
generates an output from the OR-gate tree and several weight bits from the HA trees. In the interconnection,
such outputs are fed into their associated processing elements at the second level. The output of the OR-gate tree
is connected to the subsequent OR-gate tree at the second level, and the remaining weight bits are connected to
the second level BWAs according to their weights. More precisely, the bits of weight w are connected to the
BWA responsible for w-weight inputs. Each BWA at the second level is associated with a weight of a power of
two that is less than or equal to Pmax, where Pmax is the largest power of two that is not greater than rmax + 1.
As the weight bits associated with the fourth range are all ORed in the revised BWAs, there is no need to deal with powers of two larger than Pmax. For example, let us consider a simple (8, 4) single-error-correction double-error-detection code. The corresponding first- and second-level circuits are shown in Fig. 7. Note that the encoder and XOR banks are not drawn in Fig. 7 for the sake of simplicity.
Since rmax = 2, Pmax = 2 and there are only two BWAs dealing with weights 2 and 1 at the second level. As the
bits of weight 4 fall in the fourth range, they are ORed. The remaining bits associated with weight 2 or 1 are
connected to their corresponding BWAs. Note that the interconnection induces no hardware complexity, since it
can be achieved by a bunch of hard wiring. Taking the outputs of the preceding circuits, the decision unit finally
determines if the incoming tag matches the retrieved codeword by considering the four ranges of the Hamming
distance. The decision unit is in fact a combinational logic of which functionality is specified by a truth table
that takes the outputs of the preceding circuits as inputs. For the (8, 4) code whose first- and second-level circuits are shown in Fig. 7, the truth table for the decision unit is described in Table I. Since U and V cannot be set simultaneously, such cases are implicitly included in the don't-care terms of Table I.
C. General Expressions for the Complexity and the Latency
The complexity as well as the latency of combinational circuits heavily depends on the algorithm
employed. In addition, as the complexity and the latency usually conflict with each other, it is unfortunately hard to derive an analytical and fully deterministic equation that shows the relationship between
the number of gates and the latency for the proposed architecture and also for the conventional SA-based
architecture. To circumvent the difficulty in analytical derivation, we present instead an expression that can be
used to estimate the complexity and the latency by employing some variables for the nondeterministic parts. The
complexity of the proposed architecture, C, can be expressed as

C = CXOR + CENC + CBWA(k) + CBWA(n − k) + C2nd + CDU

where CXOR, CENC, C2nd, CDU, and CBWA(n) are the complexities of the XOR banks, the encoder, the second-level circuits, the decision unit, and a BWA for n inputs, respectively. Using a recurrence relation, CBWA(n) can be calculated as

CBWA(n) = ⌊n/2⌋·CHA + CBWA(⌈n/2⌉) + CBWA(⌊n/2⌋)

where the seed value, CBWA(1), is 0, and CHA denotes the complexity of a half adder. Note that when a + b = c, CBWA(a) + CBWA(b) ≤ CBWA(c) holds for all positive integers a, b, and c. Because of this inequality and the fact that an OR-gate tree for n inputs is always simpler than a BWA for n inputs, both CBWA(k) + CBWA(n − k) and C2nd are bounded by CBWA(n). The latency of the proposed architecture, L, can be expressed as

L = max(LXOR + LBWA(k), LENC + LXOR + LBWA(n − k)) + L2nd + LDU
where LXOR, LENC, L2nd, LDU, and LBWA(n) are the latencies of an XOR bank, an encoder, the second
level circuits, the decision unit, and a BWA for n inputs, respectively. Note that the latencies of the OR-gate tree
and BWAs for x ≤ n inputs at the second level are all bounded by ⌈log2 n⌉. As one of the BWAs at the first level
finishes earlier than the other, some components at the second level may start earlier. Similarly, some BWAs or
the OR-gate tree at the second level may provide their output earlier to the decision unit so that the unit can
begin its operation without waiting for all of its inputs. In such cases, L2nd and LDU can be partially hidden by
the critical path of the preceding circuits, and L becomes shorter than the given expression.
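The recurrence for CBWA(n) can be evaluated numerically. The sketch below counts half adders (CHA = 1) and uses ⌈log2 n⌉ as the stage-depth bound; the split into ⌈n/2⌉ sum-side and ⌊n/2⌋ carry-side inputs is our reading of the structure in Fig. 6(a), not a formula quoted from the paper.

```python
import math

# Hedged sketch: estimate BWA complexity by counting half adders (HAs).
# Assumption: a BWA for n inputs has floor(n/2) first-stage HAs; their sum
# bits feed a BWA for ceil(n/2) inputs and their carry bits feed a BWA for
# floor(n/2) inputs.

def c_bwa(n: int, c_ha: int = 1) -> int:
    """Complexity estimate with seed value C_BWA(1) = 0."""
    if n <= 1:
        return 0
    return (n // 2) * c_ha + c_bwa(math.ceil(n / 2), c_ha) + c_bwa(n // 2, c_ha)

def l_bwa(n: int) -> int:
    """Latency bound in HA stages: ceil(log2 n)."""
    return math.ceil(math.log2(n)) if n > 1 else 0

print(c_bwa(8), l_bwa(8))  # → 12 3
```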
IV. Proposed BWA Architecture
The BWA architecture above describes the operation of the (8,4) code. The same architecture, with some extensions, is used to build the proposed architecture, which covers the (16,11), (24,18), and (40,33) codes. The (8,4) code takes code words of only 8 bits, whereas the proposed architecture increases the bit length of the code words so that more information can be processed.
Fig. 8(a): BWA for the parity part of the (16,11) code
Fig. 8(b): BWA for the incoming tag of the (16,11) code
The figure above shows the BWA architecture for the (16,11) codeword, with 11 bits of incoming tag information and 5 bits of parity information. Architectures for the (24,18) and (40,33) codes can be drawn likewise.
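The code sizes listed above are consistent with extended-Hamming SEC-DED codes, for which m Hamming parity bits (with 2^m ≥ k + m + 1) plus one overall parity bit are needed. A quick check using standard coding-theory arithmetic (not a computation from the paper):

```python
# Minimal parity width of a SEC-DED (extended Hamming) code for k data bits.

def secded_parity_bits(k: int) -> int:
    m = 1
    while 2 ** m < k + m + 1:  # Hamming bound for single-error correction
        m += 1
    return m + 1               # plus one overall parity bit for DED

for k in (4, 11, 18, 33):
    print((k + secded_parity_bits(k), k))
# → (8, 4), (16, 11), (24, 18), (40, 33)
```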
V. Evaluation
For a set of four codes including the (31,25) code, Table 2 shows the latencies and hardware complexities resulting from three architectures: 1) the decode-and-compare architecture, 2) the SA-based direct compare architecture, and 3) the proposed one. We measured the metrics at the gate level first and then implemented the circuits in CMOS technology to provide more realistic results that account for practical factors, e.g., gate sizing and wiring delays. The latency is measured from the time when the incoming address arrives; as the critical path starts from the arrival of the incoming address at a cache memory, the encoding delay must be included in the latency computation, and the latency values in Table 2 are all measured in this way. The critical-path delays in Table 2 are obtained by performing post-layout simulations, and the equivalent gate counts are measured by counting a two-input NAND gate as one.
As shown in Table 2, the proposed architecture is effective in reducing the latency as well as the hardware complexity, even when practical factors are considered. Note that the advantage of the proposed architecture over the SA-based one in shortening the latency grows as the size of the codeword increases. The reason is as follows. The latencies of the SA-based architecture and the proposed one are dominated by SAs and HAs, respectively. Each time the bit width doubles, at least one more stage of SAs or HAs needs to be added. Since the critical path of an HA consists of only one gate while that of an SA has several gates, the proposed architecture achieves lower latency than its SA-based counterpart, especially for long code words.
VI. Conclusion
To reduce latency and hardware complexity, a new architecture has been presented in this brief for matching data protected with an ECC. The proposed architecture examines whether the incoming data matches the stored data when a certain number of erroneous bits are corrected. To reduce latency, the comparison of the data is parallelised with the encoding process that generates the parity information. In addition, an efficient processing architecture has been presented to further minimise the latency and complexity. The proposed architecture is effective in reducing both. Though this brief focuses only on tag matching in cache memory, the proposed method is applicable to diverse applications that need such comparison.