The document proposes a new normalization technique for compensating the optimistic output information produced by a soft output Viterbi algorithm (SOVA) decoder in a turbo decoder. The technique counts the sign differences between the a-priori and extrinsic information to determine a normalization factor for each data block. Simulations show the proposed technique achieves about a 0.2 dB coding gain improvement on average compared to other techniques, while reducing the number of iterations needed for decoding by up to 21.
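The sign-counting idea can be illustrated with a small sketch. Note that the mapping below from the mismatch ratio to the scale factor (a linear interpolation between `max_scale` and `min_scale`, both hypothetical parameters) is only a placeholder for illustration, not the paper's actual normalization rule:

```python
def normalize_extrinsic(apriori, extrinsic, max_scale=1.0, min_scale=0.5):
    """Scale a block of extrinsic LLRs based on how often their signs
    disagree with the a-priori LLRs (hypothetical linear mapping)."""
    mismatches = sum(1 for a, e in zip(apriori, extrinsic) if a * e < 0)
    ratio = mismatches / len(extrinsic)           # 0.0 = full sign agreement
    scale = max_scale - (max_scale - min_scale) * ratio
    return [scale * e for e in extrinsic]
```

The intuition is that many sign disagreements indicate the SOVA output is unreliable and therefore too optimistic, so the extrinsic values are scaled down more aggressively.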
Nowadays, exponential advancement in reversible computation has led to better fabrication and integration processes. Reversible logic has become very popular over the last few years since reversible circuits dramatically reduce energy loss, consuming less power by avoiding bit loss through their unique input-output mapping. This paper presents two new gates, called RC-I and RC-II, to design an n-bit signed binary comparator; simulation results show that the proposed circuit works correctly and gives significantly better performance than existing counterparts. An algorithm is presented for constructing an optimized reversible n-bit signed comparator circuit. Moreover, lower bounds are proposed on the quantum cost, the number of gates used, and the number of garbage outputs generated for designing a low-cost reversible signed comparator. The comparative study shows that the proposed design exhibits superior performance considering all the efficiency parameters of reversible logic design, namely the number of gates used, quantum cost, garbage outputs and constant inputs, and outperforms the other existing approaches.
EVOLUTION OF STRUCTURE OF SOME BINARY GROUP-BASED N-BIT COMPARATOR, N-TO-2N D...VIT-AP University
The document describes the design of reversible comparators and decoders using a novel 4x4 reversible gate called the inventive gate. It introduces the inventive gate and shows how it can realize various logic functions such as AND, OR, and XOR. It then presents the design of a 2-to-4 reversible decoder using the inventive gate that generates 2 garbage outputs and requires 4 gates. Lemmas are provided to show an n-to-2^n reversible decoder can be designed using a minimum of 2n+1 gates. The document goes on to describe the design of 1-bit, 2-bit, 8-bit, 32-bit and n-bit reversible comparators using the inventive gate with low values for
Iaetsd low power flip flops for vlsi applicationsIaetsd Iaetsd
The document discusses low power flip flops for use in digitally controlled delay lines (DCDLs). It first describes issues with conventional NAND-based DCDLs, such as glitches that occur when the control code changes. It then proposes using a Low Power Forced Stack Clocked Pass Transistor flip-flop (LP-FSCPTFF) as the driving circuit in the DCDL. This flip-flop architecture consumes less power and has lower delay than dual edge triggered flip flops used conventionally. Simulation results show the proposed DCDL using LP-FSCPTFF reduces power consumption by up to 90% compared to other efficient flip-flop designs. The low power DCD
A Novel Design of 4 Bit Johnson Counter Using Reversible Logic Gatesijsrd.com
In recent years, reversible logic circuits have attracted considerable attention in fields like nanotechnology, quantum computing, cryptography, optical computing and low-power circuit design due to their low power dissipation. In this paper we propose a design of a 4-bit Johnson counter using reversible gates and derive the quantum cost, constant inputs, garbage outputs and number of gates needed to implement it.
Simple regenerating codes: Network Coding for Cloud StorageKevin Tong
The document presents Simple Regenerating Codes (SRC) for efficient data repair in cloud storage systems. SRC combines MDS codes for reliability with XOR operations to allow repair using minimal bandwidth and disk I/O. Simulations show SRC reduces storage costs compared to replication and maintains high reliability while improving repair scalability through reduced repair bandwidth and disk accesses.
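The XOR-repair mechanism underlying SRC can be sketched in a few lines: a parity block is the XOR of the data blocks in its group, and any single lost block is recovered as the XOR of the survivors. This shows only the generic XOR-parity idea, not the full SRC construction with its MDS layer:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

data = [b"node0", b"node1", b"node2"]
parity = xor_blocks(data)                       # stored on a fourth node
# If the node holding data[1] fails, rebuild its block from the survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
```

Repair reads only the surviving blocks of one parity group rather than the whole object, which is the source of the bandwidth and disk-I/O savings the abstract describes.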
Network Coding for Distributed Storage Systems(Group Meeting Talk)Jayant Apte, PhD
Reviews work of Koetter et al. and Dimakis et al.
The former provides an algebraic framework for linear network coding. The latter reduces the so-called repair problem to a single-source multicast network-coding problem and shows that there is a tradeoff between the amount of data stored in a distributed storage system and the amount of data transfer required to repair the system if a node (hard drive) fails.
This document discusses turbo codes, a type of error-correction code built by concatenating two convolutional code blocks in parallel. It focuses on investigating the iterative decoding of turbo codes: the bit error rate is calculated over multiple decoding iterations and plotted against the signal-to-noise ratio. Quadratic permutation polynomial and random interleavers are analyzed. Results show that turbo codes with a memory of 3 and a 1280-bit interleaver achieve the best performance, reaching a bit error rate of 10⁻⁵ at -1.2 dB after 10 decoding iterations.
Iaetsd implementation of power efficient iterative logarithmic multiplier usi...Iaetsd Iaetsd
This document describes the design and implementation of a power efficient iterative logarithmic multiplier using Mitchell's algorithm and reversible logic. It involves converting multiplication to addition using logarithmic numbers. The proposed design implements a basic block consisting of leading one detectors, encoders, barrel shifters and a decoder to calculate an approximate product. Error correction circuits are then cascaded with the basic blocks to improve accuracy. The 4x4 reversible logarithmic multiplier is designed and simulated using Xilinx tools, demonstrating lower power consumption through the use of reversible logic.
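Mitchell's approximation, which the multiplier above is built around, represents each operand x by its characteristic k = floor(log2 x) and mantissa m = x/2^k - 1, then approximates the product from k1 + k2 and m1 + m2. A minimal floating-point sketch (the hardware version operates on fixed-point bit fields instead):

```python
def mitchell_product(a, b):
    """Approximate a*b via Mitchell's logarithmic method (a, b positive ints)."""
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1   # characteristics
    m1 = a / (1 << k1) - 1.0                          # mantissas in [0, 1)
    m2 = b / (1 << k2) - 1.0
    if m1 + m2 < 1.0:                                 # no mantissa carry
        return (1 << (k1 + k2)) * (1.0 + m1 + m2)
    return (1 << (k1 + k2 + 1)) * (m1 + m2)           # carry into the exponent
```

The result is exact for powers of two and never overestimates otherwise, with a worst-case relative error of about 11%, which is why the paper cascades error-correction circuits after the basic block.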
A review on reversible logic gates and their implementationDebraj Maji
The document provides an overview of reversible logic gates and their implementation. It discusses how reversible logic gates can reduce power dissipation compared to irreversible logic gates. Some key reversible logic gates are described, including NOT, CNOT, Feynman, Toffoli, and Fredkin gates. Their truth tables and quantum costs are given. The document serves to introduce researchers to reversible logic gates that can be used to design more complex computing circuits with potential applications in quantum computing and low-power electronics.
Implementation of the Binary Multiplier on CPLD Using Reversible Logic GatesIOSRJECE
The document discusses the implementation of a binary multiplier on a CPLD using reversible logic gates. It begins by introducing reversible logic gates and describes common reversible gates like the Toffoli gate. It then proposes a novel 4x4 reversible gate called the TSG gate. The document outlines the design of a reversible binary multiplier architecture using these reversible gates. Specifically, it describes generating partial products in parallel using Fredkin gates and then merging them using reversible adders. Simulation results showing the design of a 4x4 bit reversible multiplier are also presented. In conclusion, the document discusses how this CPLD implementation of a reversible binary multiplier using novel TSG gates lays the foundation for more complex reversible systems with applications in quantum computing.
Implementation and Comparison of Efficient 16-Bit SQRT CSLA Using Parity Pres...IJERA Editor
In Very Large Scale Integration (VLSI) design, the Carry Select Adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. In this paper we propose a design of a SQRT CSLA using the parity-preserving reversible gate (P2RG). Reversible logic is an emerging field in today's VLSI design. In conventional circuits, logic gates such as AND and OR are irreversible in nature, and computing with irreversible logic results in energy dissipation. This problem can be circumvented by using reversible logic; under ideal conditions, a reversible logic gate produces zero power dissipation. The proposed design is more efficient in terms of delay than the irreversible SQRT CSLA. The simulation is done using Xilinx.
Design of Reversible Sequential Circuit Using Reversible Logic SynthesisVLSICS Design
Reversible logic is one of the most vital issues at present, and it has many application areas: low-power CMOS, quantum computing, nanotechnology, cryptography, optical computing, DNA computing, digital signal processing (DSP), quantum-dot cellular automata, communication, and computer graphics. It is not possible to realize quantum computing without implementing reversible logic. The main purposes of designing reversible logic are to decrease the quantum cost, the depth of the circuits, and the number of garbage outputs. In this paper, we propose a new reversible gate, and we design an RS flip-flop and a D flip-flop using the proposed gate and the Peres gate. The proposed designs are better than existing ones in terms of the number of reversible gates and garbage outputs, so this realization is more efficient and less costly than other realizations.
Power Optimization using Reversible Gates for Booth’s MultiplierIJMTST Journal
Reversible logic has attracted researchers in the last decade mainly due to its low power dissipation, and designers' endeavours are thus continuing toward complete reversible circuits built from reversible gates. This paper presents a design methodology for realizing Booth's multiplier in reversible mode so that power is optimized. Booth's multiplier is considered one of the fastest multipliers in the literature, and we show an efficient design methodology for it in the reversible paradigm. The proposed architecture can perform both signed and unsigned multiplication of two operands without any feedback, whereas existing multipliers in reversible mode include loops, which are strictly prohibited in reversible logic design. Theoretical underpinnings established for the proposed design show that the proposed circuit is very efficient from a reversible circuit design point of view.
Hamming net based Low Complexity Successive Cancellation Polar DecoderRSIS International
This paper aims to implement a hybrid Polar encoder using knowledge of mutual information and channel capacity. Further, a Hamming-weight successive cancellation decoder is simulated with QPSK modulation in the presence of additive white Gaussian noise. Experiments on the effect of channel polarization show that, for a 256-bit data stream, 30% of the channels have zero capacity and 49% of the channels have one-bit capacity. The decoding complexity is reduced to almost half compared to the conventional successive cancellation decoding algorithm, while the required SNR of 7 dB is achieved at the targeted BER of 10⁻⁴. The penalty paid is the training time required at the decoding end.
International Journal of Engineering and Science Invention (IJESI) inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
The document proposes a new optimized design for a binary coded decimal (BCD) adder using reversible logic gates. It summarizes the basic definitions of reversible logic and describes commonly used reversible gates like CNOT, Toffoli, Peres, TR, and MTSG gates. It then presents the conventional design of a BCD adder and proposes a new design using MTSG gates that has lower quantum cost, fewer gates, and less delay compared to existing designs. The proposed 4-bit reversible BCD adder requires only 10 gates and has a quantum cost of 40.
Evolution of Structure of Some Binary Group-Based N-Bit Compartor, N-To-2N De...VLSICS Design
Reversible logic has attracted substantial interest due to its low power consumption, which is the main concern of low-power VLSI systems. In this paper, a novel 4x4 reversible gate called the inventive gate is introduced, and using this gate 1-bit, 2-bit, 8-bit, 32-bit and n-bit group-based reversible comparators are constructed with low values of the reversible parameters. MOS transistor realizations of the 1-bit, 2-bit, and 8-bit reversible comparators are also presented, and their power, delay and power-delay product (PDP) are found with an appropriate aspect ratio W/L. The novel inventive gate can also be used as an n-to-2^n decoder. The novel reversible circuit design style is compared with existing ones; the comparative results show the novel gate's wide utility, and the group-based reversible comparator outperforms present designs in terms of number of gates, garbage outputs and constant inputs.
This document presents the design and implementation of an FPGA-based BCH decoder. It discusses BCH codes, which are binary error-correcting codes used in wireless communications. The implemented decoder is for a (15, 5, 3) BCH code, meaning it can correct up to 3 errors in a block of 15 bits. The decoder uses a serial input/output architecture and is implemented using VHDL on a FPGA device. It performs BCH decoding through syndrome calculation, running the Berlekamp-Massey algorithm to solve the key equation, and using Chien search to find error locations. The simulation result verifies correct decoding operation.
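The first decoding stage mentioned above, syndrome calculation for a length-15 binary BCH code, can be sketched in Python using GF(16) built from the primitive polynomial x^4 + x + 1 (a common choice, assumed here since the document's field construction is not given):

```python
# Build exp/log tables for GF(2^4) with primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x       # duplicate so indices up to 28 wrap for free
    LOG[x] = i
    x <<= 1
    if x & 0x10:                   # reduce modulo x^4 + x + 1
        x ^= 0b10011

def syndromes(received, t=3):
    """S_j = r(alpha^j), j = 1..2t, for a 15-bit received word r_0..r_14."""
    out = []
    for j in range(1, 2 * t + 1):
        s = 0
        for i, bit in enumerate(received):
            if bit:
                s ^= EXP[(i * j) % 15]   # add alpha^(i*j) in GF(16)
        out.append(s)
    return out
```

An all-zero syndrome vector means the received word is a codeword; otherwise the Berlekamp-Massey and Chien-search stages use the syndromes to locate the errors.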
A NEW DESIGN TECHNIQUE OF REVERSIBLE BCD ADDER BASED ON NMOS WITH PASS TRANSI...VLSICS Design
In this paper, we have proposed a new design technique of BCD Adder using newly constructed reversible gates are based on NMOS with pass transistor gates, where the conventional reversible gates are based on CMOS with transmission gates. We also compare the proposed reversible gates with the conventional CMOS reversible gates which show that the required number of Transistors is significantly reduced.
A method to determine partial weight enumerator for linear block codesAlexander Decker
This document presents a method to determine partial weight enumerators (PWE) for linear block codes using the error impulse technique and Monte Carlo method. The PWE can be used to compute an upper bound on the error probability of the maximum likelihood decoder. As an application, the document provides PWEs and analytical performances of shortened BCH codes, including BCH(130,66), BCH(103,47), and BCH(111,55). The full weight distributions of these codes are unknown. The proposed method estimates the PWE by drawing random codewords and computing the recovery rate of known-weight codewords, obtaining the PWE within a confidence interval.
Implementation of Low-Complexity Redundant Multiplier Architecture for Finite...ijcisjournal
In the present work, a low-complexity digit-serial/parallel multiplier over a finite field is proposed. It is employed in applications like cryptography, for data encryption and decryption, to deal with discrete mathematical and arithmetic structures. The proposed multiplier utilizes a redundant representation because of its free squaring and modular reduction. The proposed 10-bit multiplier is simulated and synthesized using Xilinx Verilog HDL. It is evident from the simulation results that the multiplier has significantly lower area and power compared to previous structures using the same representation.
An Efficient FPGA Implementation of the Advanced Encryption Standard Algorithmijsrd.com
A proposed FPGA-based implementation of the Advanced Encryption Standard (AES) algorithm is presented in this paper and compared with other works to show its efficiency. The design uses an iterative looping approach with a block and key size of 128 bits and a lookup-table implementation of the S-box. This gives a low-complexity architecture that easily achieves low latency as well as high throughput. Simulation and performance results are presented and compared with previously reported designs.
Design and implementation of address generator for wi max deinterleaver on fpgaeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Performance Improved Multipliers Based on Non-Redundant Radix-4 Signed-Digit ...IJMTST Journal
In this paper, we introduce an architecture of pre-encoded multipliers for digital signal processing applications based on off-line encoding of coefficients. To this end, the Non-Redundant radix-4 Signed-Digit (NR4SD) encoding technique, which uses the digit values {-1, 0, +1, +2} or {-2, -1, 0, +1}, is proposed, leading to a multiplier design with a less complex partial-product implementation. Extensive experimental analysis verifies that the proposed pre-encoded NR4SD multipliers, including the coefficient memory, are more area- and power-efficient than the conventional Modified Booth scheme.
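For context, the conventional Modified Booth (radix-4) recoding that NR4SD is compared against maps overlapping bit triplets of the multiplier to digits in {-2, -1, 0, +1, +2}; NR4SD restricts the digit set further, and its exact recoding rules are not reproduced here. A sketch of the conventional scheme for non-negative multipliers:

```python
def booth_radix4_digits(n, width):
    """Modified Booth recoding of a non-negative `width`-bit multiplier.
    Returns digits d_i in {-2,...,+2} such that n == sum(d_i * 4**i)."""
    bits = [(n >> i) & 1 for i in range(width)] + [0, 0]  # zero-pad high bits
    prev = 0                                              # implicit b_{-1} = 0
    digits = []
    for i in range(0, width + 1, 2):
        d = prev + bits[i] - 2 * bits[i + 1]              # b_{2i-1} + b_{2i} - 2*b_{2i+1}
        digits.append(d)
        prev = bits[i + 1]
    return digits
```

Each digit selects 0, ±M or ±2M as a partial product, roughly halving the partial-product count compared with bit-by-bit multiplication; the appeal of the pre-encoded approach in the paper is that this recoding of a fixed coefficient can be done off-line.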
This document contains 5 sample question papers from previous years' examinations for a Digital Electronics Circuit course. Each paper contains 4-5 questions testing various concepts in digital logic design including:
- Boolean algebra simplification and logic minimization techniques
- Code conversions (binary to gray, decimal to BCD)
- Combinational logic circuits (multiplexers, decoders, adders)
- Sequential logic circuits (latches, flip-flops, counters)
- Logic families and their characteristics (TTL, CMOS, ECL)
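One of the code-conversion topics listed above, binary to Gray code and back, is compact enough to show directly: the Gray code of n is n XOR (n >> 1), and the inverse folds successive right shifts back in.

```python
def bin_to_gray(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_to_bin(g):
    """Invert bin_to_gray by XOR-accumulating successive right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, binary 101 (5) maps to Gray 111, and counting 0..7 in Gray code changes exactly one bit per step, which is why Gray codes are used in encoders and asynchronous counters.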
The document discusses using network coding to optimize routing schemes for multicasting in mobile ad-hoc networks. It defines the problem and assumptions, such as independent information sources and specifying multicast requirements. Network coding is described as employing coding at nodes rather than just relaying data. Simulations show that calculating the optimal routing scheme is much less complex with network coding compared to conventional routing, and the network coding solution uses close to the minimum possible energy.
Comparison of Turbo Codes and Low Density Parity Check CodesIOSR Journals
Abstract-The most powerful channel coding schemes, namely those based on turbo codes and LDPC (low-density parity-check) codes, have in common the principle of iterative decoding. Shannon's predictions for optimal codes would imply random-like codes, intuitively implying that the decoding operation on these codes would be prohibitively complex. A brief comparison of turbo codes and LDPC codes will be given in this paper, both in terms of performance and complexity. To give a fair comparison, we use codes of the same input word length; the rate of both codes is R = 1/2. However, Berrou's coding scheme could be constructed by combining two or more simple codes. These codes could then be decoded separately, whilst exchanging probabilistic, or uncertainty, information about the quality of the decoding of each bit with each other. This implied that complex codes had now become practical. This discovery triggered a series of new, focused research programmes, and prominent researchers devoted their time to this new area. Leading on from the work on turbo codes, MacKay at the University of Cambridge revisited some 35-year-old work originally undertaken by Gallager [5], who had constructed a class of codes dubbed Low Density Parity Check (LDPC) codes. Building on the increased understanding of iterative decoding and probability propagation on graphs that followed from the work on turbo codes, MacKay could now show that LDPC codes could be decoded in a similar manner to turbo codes, and may actually be able to beat them [6]. As a review, this paper will consider both these classes of codes and compare their performance and complexity. A description of both classes of codes will be given.
Design and implementation of log domain decoder IJECEIAES
Low-density parity-check (LDPC) codes have become famous in communications systems for error correction, owing to their robust error-correcting performance and their ability to meet the requirements of the 5G system. However, the biggest challenge facing researchers is the hardware implementation, because of its high complexity and long run-time. In this paper, an efficient and optimized log-domain decoder has been implemented using Xilinx System Generator with an FPGA device, the Kintex-7 (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a bit error rate (BER) very close to theoretical calculations, which indicates that this decoder is suitable for next-generation demands for high data rates with very low BER.
A review on reversible logic gates and their implementationDebraj Maji
The document provides an overview of reversible logic gates and their implementation. It discusses how reversible logic gates can reduce power dissipation compared to irreversible logic gates. Some key reversible logic gates are described, including NOT, CNOT, Feynman, Toffoli, and Fredkin gates. Their truth tables and quantum costs are given. The document serves to introduce researchers to reversible logic gates that can be used to design more complex computing circuits with potential applications in quantum computing and low-power electronics.
Implementation of the Binary Multiplier on CPLD Using Reversible Logic GatesIOSRJECE
The document discusses the implementation of a binary multiplier on a CPLD using reversible logic gates. It begins by introducing reversible logic gates and describes common reversible gates like the Toffoli gate. It then proposes a novel 4x4 reversible gate called the TSG gate. The document outlines the design of a reversible binary multiplier architecture using these reversible gates. Specifically, it describes generating partial products in parallel using Fredkin gates and then merging them using reversible adders. Simulation results showing the design of a 4x4 bit reversible multiplier are also presented. In conclusion, the document discusses how this CPLD implementation of a reversible binary multiplier using novel TSG gates lays the foundation for more complex reversible systems with applications in quantum computing.
Implementation and Comparison of Efficient 16-Bit SQRT CSLA Using Parity Pres...IJERA Editor
In Very Large Scale Integration (VLSI) outlines, Carry Select Adder (CSLA) is one of the quickest adder utilized as a part of numerous data processing processors to perform quick number crunching capacities. In this paper we proposed the design of SQRT CSLA using parity preserving reversible gate (P2RG). Reversible logic is emerging field in today VLSI design. In conventional circuits, the logic gates such as AND gate, OR gate is irreversible in nature and computing with irreversible logic results in energy dissipation. This problem can be circumvented by using reversible logic. In ideal condition, the reversible logic gate produces zero power dissipation. The proposed design is efficient in terms of delay as compare to irreversible SQRT CSLA. The simulation is done using Xilinx.
Design of Reversible Sequential Circuit Using Reversible Logic SynthesisVLSICS Design
Reversible logic is one of the most vital issue at present time and it has different areas for its application, those are low power CMOS, quantum computing, nanotechnology, cryptography, optical computing, DNA computing, digital signal processing (DSP), quantum dot cellular automata, communication, computer graphics. It is not possible to realize quantum computing without implementation of reversible logic. The main purposes of designing reversible logic are to decrease quantum cost, depth of the circuits and the number of garbage outputs. In this paper, we have proposed a new reversible gate. And we have designed RS flip flop and D flip flop by using our proposed gate and Peres gate. The proposed designs are better than the existing proposed ones in terms of number of reversible gates and garbage outputs. So, this realization is more efficient and less costly than other realizations.
Power Optimization using Reversible Gates for Booth’s MultiplierIJMTST Journal
Reversible logic attains the attraction of researchers in the last decade mainly due to low-power dissipation. Designers’ endeavours are thus continuing in creating complete reversible circuits consisting of reversible gates. This paper presents a design methodology for the realization of Booth’s multiplier in reversible mode. So that power is optimised Booth’s multiplier is considered as one of the fastest multipliers in literature and we have shown an efficient design methodology in reversible paradigm. The proposed architecture is capable of performing both signed and unsigned multiplication of two operands without having any feedbacks, whereas existing multipliers in reversible mode consider loop which is strictly prohibited in reversible logic design. Theoretical underpinnings, established for the proposed design, show that the proposed circuit is very efficient from reversible circuit design point of view.
Hamming net based Low Complexity Successive Cancellation Polar DecoderRSIS International
This paper implements a hybrid polar encoder using the knowledge of mutual information and channel capacity. A Hamming-weight-based successive cancellation decoder is then simulated with QPSK modulation in the presence of additive white Gaussian noise. Experiments on the effect of channel polarization show that, for a 256-bit data stream, 30% of the channels have zero capacity and 49% have a capacity of one bit. The decoding complexity is reduced to almost half that of the conventional successive cancellation decoding algorithm, and the required SNR of 7 dB is achieved at the target BER of 10^-4. The penalty paid is the training time required at the decoding end.
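For readers unfamiliar with channel polarization, the sketch below illustrates the effect on a binary erasure channel using the standard Bhattacharyya-parameter recursion. This is a generic textbook model, not the paper's hybrid construction, so the resulting fractions of useless and near-perfect channels need not match the 30% and 49% reported above:

```python
def polarize(z, steps):
    """Bhattacharyya-parameter recursion for a binary erasure channel.

    Each step splits every channel into a degraded ('minus') and an
    upgraded ('plus') synthetic channel; after n steps there are 2**n.
    For a BEC, z is the erasure probability and capacity is 1 - z.
    """
    zs = [z]
    for _ in range(steps):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

zs = polarize(0.5, 8)                 # 256 synthetic channels from a BEC(0.5)
useless = sum(z > 0.99 for z in zs)   # near-zero capacity: freeze these bits
perfect = sum(z < 0.01 for z in zs)   # near-one capacity: carry data here
```

The recursion conserves the average parameter, which is why polarization trades mediocre channels for a mix of nearly useless and nearly perfect ones.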
The document proposes a new optimized design for a binary coded decimal (BCD) adder using reversible logic gates. It summarizes the basic definitions of reversible logic and describes commonly used reversible gates like CNOT, Toffoli, Peres, TR, and MTSG gates. It then presents the conventional design of a BCD adder and proposes a new design using MTSG gates that has lower quantum cost, fewer gates, and less delay compared to existing designs. The proposed 4-bit reversible BCD adder requires only 10 gates and has a quantum cost of 40.
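The correction rule underlying any BCD adder, including the reversible designs discussed above, is to add the digits in binary and then add 6 whenever a digit sum exceeds 9, so the result re-enters the 0-9 range and a decimal carry is produced. A behavioral Python sketch of that rule (illustrative only, not the MTSG-gate design):

```python
def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits: binary add, then +6 correction when the
    4-bit result exceeds 9 (the conventional BCD adder rule)."""
    s = a + b + carry_in
    if s > 9:
        return (s + 6) & 0xF, 1   # corrected digit, decimal carry out
    return s, 0

def bcd_add(x, y):
    """Multi-digit BCD addition, least significant digit first."""
    result, carry = [], 0
    for a, b in zip(x, y):
        d, carry = bcd_digit_add(a, b, carry)
        result.append(d)
    if carry:
        result.append(1)
    return result
```

For example, adding the digits 7 and 5 gives 12 in binary; the +6 correction yields digit 2 with a carry, i.e. decimal 12.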
Evolution of Structure of Some Binary Group-Based N-Bit Comparator, N-To-2N De...VLSICS Design
Reversible logic has attracted substantial interest due to its low power consumption, the main concern of low-power VLSI systems. In this paper, a novel 4x4 reversible gate called the inventive gate is introduced, and 1-bit, 2-bit, 8-bit, 32-bit and n-bit group-based reversible comparators are constructed from it with low values of the reversible parameters. MOS transistor realizations of the 1-bit, 2-bit, and 8-bit reversible comparators are also presented, and their power, delay and power-delay product (PDP) are found with an appropriate aspect ratio W/L. The novel inventive gate can also be used as an n-to-2^n decoder. The proposed reversible circuit design styles are compared with existing ones; the comparative results show the wide utility of the novel gate, and the group-based reversible comparator outperforms present designs in terms of number of gates, garbage outputs and constant inputs.
This document presents the design and implementation of an FPGA-based BCH decoder. It discusses BCH codes, which are binary error-correcting codes used in wireless communications. The implemented decoder is for a (15, 5, 3) BCH code, meaning it can correct up to 3 errors in a block of 15 bits. The decoder uses a serial input/output architecture and is implemented in VHDL on an FPGA device. It performs BCH decoding through syndrome calculation, the Berlekamp-Massey algorithm to solve the key equation, and a Chien search to find the error locations. Simulation results verify correct decoding operation.
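The syndrome-calculation step of such a decoder can be sketched in a few lines. The following Python model builds GF(2^4) from the primitive polynomial x^4 + x + 1 and uses the standard generator polynomial g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 of the (15, 5, 3) BCH code, whose roots include alpha^1 through alpha^6. It illustrates the principle only, not the paper's VHDL architecture:

```python
# GF(2^4) arithmetic with primitive polynomial x^4 + x + 1.
PRIM, FIELD = 0b10011, 15
exp, log = [0] * FIELD, {}
v = 1
for i in range(FIELD):
    exp[i], log[v] = v, i
    v <<= 1
    if v & 0x10:
        v ^= PRIM

def gf_mul(a, b):
    return 0 if 0 in (a, b) else exp[(log[a] + log[b]) % FIELD]

def poly_eval(p, x):
    """Evaluate a polynomial (coefficients low-order first) at x via Horner."""
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x) ^ c
    return acc

# Generator of the (15,5,3) BCH code: x^10 + x^8 + x^5 + x^4 + x^2 + x + 1.
g = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]

def encode(msg):
    """Non-systematic encoding: c(x) = m(x) * g(x) over GF(2)."""
    code = [0] * (len(msg) + len(g) - 1)
    for i, m in enumerate(msg):
        if m:
            for j, gc in enumerate(g):
                code[i + j] ^= gc
    return code

def syndromes(r):
    """S_i = r(alpha^i) for i = 1..6; all zero iff r is a codeword."""
    return [poly_eval(r, exp[i]) for i in range(1, 7)]
```

Nonzero syndromes feed the Berlekamp-Massey step, which finds the error-locator polynomial that the Chien search then evaluates at every field element.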
A NEW DESIGN TECHNIQUE OF REVERSIBLE BCD ADDER BASED ON NMOS WITH PASS TRANSI...VLSICS Design
In this paper, we propose a new design technique for a BCD adder using newly constructed reversible gates based on NMOS with pass-transistor gates, whereas conventional reversible gates are based on CMOS with transmission gates. We also compare the proposed reversible gates with the conventional CMOS reversible gates and show that the required number of transistors is significantly reduced.
A method to determine partial weight enumerator for linear block codesAlexander Decker
This document presents a method to determine partial weight enumerators (PWE) for linear block codes using the error impulse technique and Monte Carlo method. The PWE can be used to compute an upper bound on the error probability of the maximum likelihood decoder. As an application, the document provides PWEs and analytical performances of shortened BCH codes, including BCH(130,66), BCH(103,47), and BCH(111,55). The full weight distributions of these codes are unknown. The proposed method estimates the PWE by drawing random codewords and computing the recovery rate of known-weight codewords, obtaining the PWE within a confidence interval.
Implementation of Low-Complexity Redundant Multiplier Architecture for Finite...ijcisjournal
In the present work, a low-complexity digit-serial/parallel multiplier over a finite field is proposed. Such multipliers are employed in applications like cryptography, where data encryption and decryption deal with discrete mathematical and arithmetic structures. The proposed multiplier uses a redundant representation because of its free squaring and modular reduction. The proposed 10-bit multiplier is simulated and synthesized using Xilinx Verilog HDL. The simulation results show that the multiplier has significantly lower area and power than previous structures using the same representation.
An Efficient FPGA Implementation of the Advanced Encryption Standard Algorithmijsrd.com
A proposed FPGA-based implementation of the Advanced Encryption Standard (AES) algorithm is presented in this paper and compared with other works to show its efficiency. The design uses an iterative looping approach with a block and key size of 128 bits and a lookup-table implementation of the S-box. This gives a low-complexity architecture that easily achieves low latency as well as high throughput. Simulation and performance results are presented and compared with previously reported designs.
Design and implementation of address generator for wi max deinterleaver on fpgaeSAT Publishing House
Performance Improved Multipliers Based on Non-Redundant Radix-4 Signed-Digit ...IJMTST Journal
In this paper, we introduce an architecture of pre-encoded multipliers for digital signal processing applications based on off-line encoding of coefficients. To this end, the Non-Redundant radix-4 Signed-Digit (NR4SD) encoding technique, which uses the digit values {-1, 0, +1, +2} or {-2, -1, 0, +1}, is proposed, leading to a multiplier design with a less complex partial-product implementation. Extensive experimental analysis verifies that the proposed pre-encoded NR4SD multipliers, including the coefficient memory, are more area- and power-efficient than the conventional Modified Booth scheme.
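For context, the conventional Modified Booth recoding that NR4SD is compared against maps overlapping bit triplets to digits in {-2, -1, 0, +1, +2}, halving the number of partial products. A small Python sketch of that baseline recoding (restricted to unsigned operands for simplicity; not the NR4SD scheme itself):

```python
def booth_radix4(n, width):
    """Radix-4 Modified Booth recoding of an unsigned value.

    Scans overlapping bit triplets (b_{2i+1}, b_{2i}, b_{2i-1}) and emits
    digits in {-2, -1, 0, +1, +2} such that n == sum(d_i * 4**i).
    """
    # b_{-1} = 0 on the right; zero-pad on the left to absorb the sign.
    bits = [0] + [(n >> i) & 1 for i in range(width)] + [0, 0]
    digits = []
    for i in range((width + 2) // 2):
        lo, mid, hi = bits[2 * i], bits[2 * i + 1], bits[2 * i + 2]
        digits.append(-2 * hi + mid + lo)
    return digits
```

Each digit selects 0, ±M, or ±2M of the multiplicand M, all obtainable by shift and negate, which is why Booth-family recodings are attractive in hardware.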
This document contains 5 sample question papers from previous years' examinations for a Digital Electronics Circuit course. Each paper contains 4-5 questions testing various concepts in digital logic design including:
- Boolean algebra simplification and logic minimization techniques
- Code conversions (binary to gray, decimal to BCD)
- Combinational logic circuits (multiplexers, decoders, adders)
- Sequential logic circuits (latches, flip-flops, counters)
- Logic families and their characteristics (TTL, CMOS, ECL)
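One of the listed code conversions, binary to Gray, has a particularly compact formulation that makes a good worked example (a standard identity, not taken from the question papers themselves):

```python
def bin_to_gray(n):
    # Gray codes of consecutive integers differ in exactly one bit.
    return n ^ (n >> 1)

def gray_to_bin(g):
    # Invert by XOR-folding the shifted value back in.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For instance, binary 101 (5) becomes Gray 111 (7), since 5 XOR 2 = 7.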
The document discusses using network coding to optimize routing schemes for multicasting in mobile ad-hoc networks. It defines the problem and assumptions, such as independent information sources and specifying multicast requirements. Network coding is described as employing coding at nodes rather than just relaying data. Simulations show that calculating the optimal routing scheme is much less complex with network coding compared to conventional routing, and the network coding solution uses close to the minimum possible energy.
Comparison of Turbo Codes and Low Density Parity Check CodesIOSR Journals
Abstract - The most powerful channel coding schemes, namely those based on turbo codes and LDPC (low-density parity-check) codes, share the common principle of iterative decoding. Shannon's predictions for optimal codes imply random-like codes, intuitively suggesting that decoding them would be prohibitively complex. This paper gives a brief comparison of turbo codes and LDPC codes, both in terms of performance and complexity; to make the comparison fair, we use codes of the same input word length, with rate R = 1/2 for both. Berrou's coding scheme showed that complex codes could be constructed by combining two or more simple codes, which are then decoded separately while exchanging probabilistic (uncertainty) information about the quality of each bit decision. This implied that powerful codes had become practical, a discovery that triggered a series of new, focused research programmes. Leading on from the work on turbo codes, MacKay at the University of Cambridge revisited some 35-year-old work originally undertaken by Gallager [5], who had constructed the class of codes dubbed low-density parity-check (LDPC) codes. Building on the improved understanding of iterative decoding and probability propagation on graphs, MacKay showed that LDPC codes could be decoded in a similar manner to turbo codes and may actually beat them [6]. As a review, this paper considers both classes of codes and compares their performance and complexity; a description of both classes is given.
Design and implementation of log domain decoder IJECEIAES
Low-density parity-check (LDPC) codes have become prominent in communication systems for error correction, owing to their robust error-correcting performance and their ability to meet the requirements of 5G systems. However, the main challenge faced by researchers is the hardware implementation, because of its high complexity and long run time. In this paper, an efficient and optimized design for a log-domain decoder is implemented using the Xilinx System Generator with an FPGA device, Kintex-7 (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a bit error rate (BER) very close to theoretical calculations, which shows that it is suitable for next-generation demands requiring high data rates with very low BER.
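A core building block of log-domain LDPC decoding is the check-node update, commonly implemented with the min-sum approximation: each outgoing message takes the product of the signs and the minimum magnitude of the other incoming LLRs. A plain Python sketch of that update (a generic illustration, not the paper's Xilinx design):

```python
def minsum_check_update(llrs):
    """Min-sum check-node update: each extrinsic output message carries
    the sign product and the minimum magnitude of the *other* inputs."""
    out = []
    for i in range(len(llrs)):
        others = [l for j, l in enumerate(llrs) if j != i]
        sign = -1.0 if sum(l < 0 for l in others) % 2 else 1.0
        out.append(sign * min(abs(l) for l in others))
    return out
```

Replacing the exact tanh-rule with this min/sign computation is what makes log-domain decoders hardware-friendly: only comparisons and sign logic are needed, at a small cost in BER.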
This document proposes a concatenated coding scheme with iterative decoding for a bit-shift channel. Specifically, it considers the serial concatenation of an outer error-correcting code and an inner modulation code, possibly preceded by an accumulator. It searches for optimal encoder mappings from an iterative decoding perspective for the inner code, which has been designed to correct single bit-shift errors and have large average power. This is important for inductively coupled channels, as the receiver gets its power from the received signal and the information should maximize the power transferred.
VHDL Design and FPGA Implementation of a High Data Rate Turbo Decoder based o...IJECEIAES
This paper presents the electronic synthesis, VHDL design and FPGA implementation of turbo decoders for Difference Set Codes (DSC) decoded by majority logic (ML). The VHDL design is based on decoding equations that we have simplified in order to reduce complexity, and is implemented as a parallel process to increase the data rate. A co-simulation using the DSP Builder tool on a platform designed in Matlab/Simulink allows measurement of the performance in terms of BER (bit error rate) as well as validation of the decoder. These decoders can be a good choice for future digital transmission chains. For example, for the turbo decoder based on the product code DSC (21,11)² with 5-bit quantization and one complete iteration, the results show that our entire turbo decoder can be integrated on a single chip, with a latency below 0.23 microseconds and a data rate greater than 500 Mb/s.
Non-binary LDPC codes are LDPC codes whose parity-check equations are defined over a Galois field GF(q) of cardinality greater than 2, so that addition and multiplication are performed on field symbols rather than bits. NB-LDPC codes can be decoded by belief propagation on a Tanner graph, passing messages in the form of LLRs between variable and check nodes. They perform better than binary LDPC codes at low code rates and short lengths, owing to their higher mutual information and the absence of bit marginalization in decoding, but at the price of increased decoding complexity.
Mining of time series data base using fuzzy neural information systemsDr.MAYA NAYAK
This document discusses techniques for time series data mining and clustering. It introduces data mining and knowledge discovery in databases (KDD). Key techniques discussed include wavelet transforms, S-transforms, and Fourier transforms for feature extraction from time series data. Algorithms like K-means clustering and particle swarm optimization (PSO) are presented for clustering time series data based on extracted features. Hybrid approaches that combine K-means and PSO are also summarized for improved time series clustering.
Design and Implementation of Area Optimized, Low Complexity CMOS 32nm Technol...IJERA Editor
A numerically controlled oscillator (NCO) is a digital signal generator and a very important block in many digital communication systems, such as software-defined radios, digital radio sets and modems, and down/up converters for cellular and PCS base stations. An NCO creates a synchronous, discrete-time, discrete-valued representation of a sinusoidal waveform. This paper presents the design of a CMOS lookup-table-based numerically controlled oscillator that improves performance and reduces power and area requirements. The design is implemented in 32 nm CMOS technology with the Microwind 3.8 software tool; it can also interface with analog circuits, enabling the integration of a complete system on chip. The resulting NCO achieves reasonable speed, resolution and linearity with low power and low area. Pre-layout simulation has been carried out using the 32 nm CMOS process technology.
Soft Decision Scheme for Multiple Descriptions Coding over Rician Fading Chan...CSCJournals
This paper presents a new MDC scheme for robust wireless data communications. The soft decision making of the MDC scheme utilises the statistical received-data error obtained from channel decoding. The coded bit stream in the system is protected using either a Reed-Solomon (RS) or a low-density parity-check (LDPC) channel coding scheme. Simulation results show that this system achieves significant performance improvements over single-description, single-channel transmission systems in terms of symbol error rate and peak signal-to-noise ratio (PSNR): the system with RS codes is 2 to 5 dB better than a single description, and the system with LDPC codes is 6 to 10 dB better.
Direct digital synthesis based cordic algorithm a novel approach towards digi...eSAT Journals
Abstract - Modulation is the technique in which a carrier signal varies according to the amplitude of the modulating signal. A brilliant solution for realizing digital modulators is the CORDIC (COordinate Rotation DIgital Computer) algorithm, which operates in two modes, rotation mode and vectoring mode. Here, rotation mode is used to convert coordinates from polar to rectangular form. This paper presents the implementation of different communication subsystems found in software-defined radio (ASK, FSK, PSK, BPSK, QPSK, 4-QAM, 16-QAM) using the CORDIC algorithm. The focus of the paper is the analysis and simulation of modulation schemes using a Direct Digital Synthesizer based on the CORDIC algorithm. Keywords: Software Defined Radio, CORDIC algorithm, DDS, ASK, FSK, PSK, BPSK, QPSK, 4-QAM, 16-QAM.
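The rotation mode mentioned above reduces a rotation by angle theta to a sequence of shift-and-add micro-rotations through the angles arctan(2^-i). A floating-point Python sketch of that idea (a hardware DDS would use fixed-point arithmetic and a precomputed gain constant):

```python
import math

def cordic_rotate(theta, iterations=32):
    """CORDIC in rotation mode: drive the residual angle z to zero with
    shift-and-add micro-rotations; returns (cos(theta), sin(theta)).
    Valid for |theta| <= ~1.74 rad without argument reduction."""
    x, y, z = 1.0, 0.0, theta
    gain = 1.0
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        gain *= math.sqrt(1.0 + 4.0 ** -i)   # each micro-rotation stretches
    return x / gain, y / gain                # compensate the accumulated gain
```

Feeding a phase accumulator's output through such a rotator yields the sine and cosine samples a DDS-based modulator needs, with only shifts and adds in the loop.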
Performance analysis of viterbi decoder for wireless applicationsacijjournal
The Viterbi decoder is employed in wireless communication to decode convolutional codes, which are used in many robust digital communication systems. Convolutional encoding with Viterbi decoding is a powerful method for forward error correction. This paper deals with the synthesis and FPGA (Field Programmable Gate Array) implementation of a Viterbi decoder with constraint lengths of three and seven and a code rate of 1/2. The performance of the decoder is analyzed in terms of resource utilization. The design is simulated using Verilog HDL and synthesized and implemented using Xilinx ISE 9.1 and a Spartan-3E kit. It is compatible with many common standards such as 3GPP, IEEE 802.16 and LTE.
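As a companion to this abstract, here is a minimal hard-decision Viterbi decoder in Python for a rate-1/2, constraint-length-3 code. The generator polynomials 7 and 5 (octal) are an assumption, a common textbook choice, since the abstract does not state which polynomials the design uses:

```python
# Rate-1/2 convolutional code, constraint length K = 3.
# Generators G0 = 7 (octal, taps 111) and G1 = 5 (octal, taps 101).
def conv_encode(bits):
    state = 0                      # two memory bits (b_{t-1}, b_{t-2})
    out = []
    for b in bits:
        reg = (b << 2) | state
        out.append(((reg >> 2) ^ (reg >> 1) ^ reg) & 1)   # taps 111
        out.append(((reg >> 2) ^ reg) & 1)                # taps 101
        state = reg >> 1           # shift the new bit into the register
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]   # start in the all-zero state
    paths = [[], [], [], []]
    for t in range(nbits):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metrics, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):       # extend each survivor with bit b
                reg = (b << 2) | s
                o0 = ((reg >> 2) ^ (reg >> 1) ^ reg) & 1
                o1 = ((reg >> 2) ^ reg) & 1
                ns = reg >> 1
                m = metrics[s] + (o0 != r0) + (o1 != r1)  # Hamming metric
                if m < new_metrics[ns]:
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[min(range(4), key=lambda s: metrics[s])]
```

With free distance 5, this code recovers the message even when a channel bit is flipped, which the test below exercises by terminating the message with two tail zeros.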
High Speed Memory Efficient Multiplier-less 1-D 9/7 Wavelet Filters Based NED...IJERA Editor
This document proposes a new efficient distributed arithmetic (NEDA) technique for implementing high-speed memory-efficient 1-D 9/7 wavelet filters. NEDA is an area-efficient architecture that does not require ROM, multiplication, or subtraction. It can expose redundancy in adder arrays consisting of entries of 0 and 1. The document describes how NEDA can be used to compute the high pass filter output of a 1-D discrete wavelet transform using 9/7 filters through an example. It also shows the proposed NEDA architecture and processing steps to obtain the low pass and high pass filter outputs with just additions and shifts.
This document summarizes the simulation of a turbo coded orthogonal frequency division multiplexing (OFDM) system. Key points:
1) OFDM divides a wideband channel into narrowband channels to mitigate multipath fading effects. Turbo codes are added to OFDM to improve performance at high data rates.
2) Turbo codes use parallel concatenated convolutional codes for encoding and iterative decoding. Simulation shows turbo coded OFDM outperforms uncoded OFDM with lower bit error rates over both additive white Gaussian noise and Rayleigh fading channels.
3) The simulation model includes a turbo encoder, QAM modulation, IFFT/FFT, a channel with noise, and a turbo decoder. Results show turbo coded OFDM provides much better error performance than uncoded OFDM.
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...IJERA Editor
Turbo codes are built from convolutional codes and are known for their error correction capability; they have been described as super product codes because they have replaced backward error correction schemes. Turbo codes are much more efficient than earlier backward error correction codes because they are forward error correction (FEC) codes: there is no need for a feedback link to request retransmission from the transmitter when bits are corrupted in the channel. A Viterbi decoder decodes a stream of digital data bits that has been encoded by a convolutional encoder. In this paper we introduce an RSC (Recursive Systematic Convolutional) encoder with a constraint length of 2 and a code rate of 1/3. Both the RSC encoder and the Viterbi decoder are worked through on paper as well as implemented in MATLAB, and simulation results are presented.
High Speed Decoding of Non-Binary Irregular LDPC Codes Using GPUs (Paper)Enrique Monzo Solves
Implementation of a high speed decoding of non-binary irregular LDPC codes using CUDA GPUs.
Moritz Beermann, Enrique Monzó, Laurent Schmalen, Peter Vary
IEEE SiPS, Oct. 2013, Taipei, Taiwan
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document describes the implementation of a turbo encoder and turbo decoder on a TMS320C6713 digital signal processor. Turbo codes are used for error correction in wireless communication systems. A turbo encoder consists of two recursive systematic convolutional encoders connected in parallel by an interleaver. The document implements a turbo encoder that encodes input bits and a turbo decoder that uses the iterative soft-output Viterbi algorithm to decode the encoded bits. The implementation is optimized for execution time and memory usage on the TMS320C6713 DSP. Simulation results validate the DSP implementation over a Rayleigh fading channel.
This document discusses the demodulation of differential binary phase shift keying (DPSK) using the VDSP++ 4.5 software and STEL-2110A chip circuitry. It describes the DPSK modulation technique and how a DPSK signal is generated. It then explains the demodulation process which involves multiplying the received signal with a delayed version, and integrating the output using a synchronous demodulator. The implementation uses the STEL-2110A chip which contains components like accumulators, timing discriminators, and numerically controlled oscillators to perform timing recovery and extract the transmitted data bits. Simulation results using the VDSP++ software and MATLAB generated test signals are also presented.
Decoding of the extended Golay code by the simplified successive-cancellation...TELKOMNIKA JOURNAL
This paper describes the adaptation of a polar code decoding technique to the extended Golay code. Using the bridge that a permutation matrix provides between the codewords of these two classes of codes, the Golay code can be decoded by any polar code technique. In contrast to the successive-cancellation list technique, which estimates the bits serially, we propose an adaptation of the simplified successive-cancellation list technique to polar codes equivalent to the Golay code. The simulations achieve the performance of maximum likelihood decoding with the low decoding complexity of polar codes, compared to one of the best-known universal decoders of linear codes in the literature.
This paper proposes a frame-dependent fuzzy channel compensation (FD-FCC) method for speech recognition over time-varying telephone channels. The method uses a two-stage bias subtraction process. First, it chooses the word model that best matches the input utterance using maximum likelihood estimation. Then it derives a set of mixture biases by averaging cepstral differences between the input and chosen model. In the second stage, instead of using a single bias, it calculates a frame-dependent bias for each input frame as a convex combination of mixture biases weighted by a fuzzy membership function. Experimental results show this method can effectively cancel channel effects even with additive background noise in telephone speech recognition systems.
This document proposes an adaptive signal limiter (ASL) to smooth spectral features of reference models and test speech for noisy speech recognition. The ASL adaptively adjusts the smoothing factor on a frame-by-frame basis according to the signal-to-noise ratio (SNR), in order to reduce feature variability in noisy conditions while preserving important information in clean segments. Experimental results show the ASL achieves significant improvement in recognition accuracy in noisy environments over a wider range of SNR values compared to a hard limiter, which uses a fixed smoothing factor.
The document proposes a weighted filter bank analysis (WFBA) scheme to derive robust mel frequency cepstral coefficients (MFCCs) for speech recognition. The WFBA emphasizes the peaks of log filter bank energies while attenuating lower energies. Two weighting functions are investigated. Experimental results on a Mandarin speech database show the WFBA-based features have better discriminative ability and provide higher syllable recognition rates than standard MFCCs and other schemes in noisy and channel-distorted conditions. The direct WFBA requires less computation than an alternative using a fuzzy membership weighting function.
This document presents an investigation into using adaptive filter bank analysis (AFBA) to derive robust mel frequency cepstral features for noisy speech recognition. AFBA adaptively incorporates the signal-to-noise ratio (SNR) value into filter bank analysis by making the weighting factor for each log filter bank energy component dependent on the SNR frame-by-frame. Experimental results on a Mandarin speech database show AFBA provides higher recognition rates than other techniques in various noisy conditions.
This document presents a method for improving pitch extraction from noisy speech signals using a fuzzy weighted autocorrelation function (FWS-ACF). Simulation results show the FWS-ACF method provides better robustness against background noise than conventional autocorrelation and other pitch extraction methods. The FWS assigns membership values between 0 and 1 to emphasize true peaks in the autocorrelation function. Results show the FWS-ACF achieves lower gross pitch error rates than other methods, such as the cepstrum, the average magnitude difference function, and weighted autocorrelation, when extracting pitch from noisy speech.
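The baseline that FWS-ACF builds on is plain autocorrelation pitch extraction: pick the lag that maximizes the frame's autocorrelation within the plausible pitch-period range. A minimal unweighted Python sketch on a synthetic tone (the paper's fuzzy weighting is not modeled here):

```python
import math

def autocorr_pitch(x, fs, f_lo=60.0, f_hi=400.0):
    """Estimate pitch as fs / lag, where lag maximizes the frame's
    autocorrelation over the plausible pitch-period range."""
    lags = range(int(fs / f_hi), int(fs / f_lo) + 1)

    def r(lag):
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

    return fs / max(lags, key=r)

# A clean 200 Hz tone sampled at 8 kHz as a test frame.
fs, f0 = 8000, 200.0
frame = [math.sin(2 * math.pi * f0 * i / fs) for i in range(800)]
```

In noise, spurious peaks appear at wrong lags; the FWS idea is to multiply the autocorrelation by lag-dependent membership weights so the true period's peak dominates.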
This document presents a new method for creating robust speech features based on linear predictive coding (LPC) for noisy speech recognition. The method applies a weighted arcsine transform to the autocorrelation sequence (ACS) of each speech frame. This transform uses an SNR-dependent smoothing factor to more heavily smooth segments with lower SNR. It also weights each ACS component by the inverse of the average magnitude difference function (AMDF) to emphasize spectral peaks. Experimental results on Mandarin digit recognition show the new LPC features are more noise robust than conventional LPC features over a wide range of SNRs.
This document summarizes an article originally published in Elsevier that discusses modeling bit-level stochastic correlation for turbo decoding. The key points are:
- Turbo decoding neglects correlation between systematic and parity bits introduced during encoding, which can impact performance.
- The document proposes modeling this correlation to better approximate the underlying correlation within received codewords.
- By adjusting the correlation model parameter, various degrees of correlation can be captured. This may improve bit error rate at a small increase in complexity compared to conventional turbo decoding.
This document discusses methods for improving noisy speech recognition using state duration modeling in hidden Markov models (HMMs). It reviews existing state duration modeling methods, including non-parametric methods that directly estimate duration distributions from training data and parametric methods that model duration distributions using functions like Poisson, gamma, Gaussian distributions. The document then proposes a new method called proportional alignment decoding (PAD) that retrains HMMs using state duration distributions to make the models more robust to noise. An experiment on multi-speaker Mandarin digit recognition demonstrates the new PAD method outperforms existing state duration modeling methods in noisy conditions.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
129966862758614726[1]
Normalization of Output Information for a Turbo Decoder Using SOVA
Yi-Nan Lin and Wei-Wen Hung*
SUMMARY
It has been shown that the output information produced by the soft output Viterbi algorithm (SOVA) is too optimistic. To compensate for this, the output information should be normalized. This letter proposes a simple normalization technique that extends the existing sign difference ratio (SDR) criterion. The new normalization technique counts the sign differences between the a-priori information and the extrinsic information, and then adaptively determines the corresponding normalization factor for each data block. Simulations comparing the new technique with other well-known normalization techniques show that the proposed normalization technique can achieve about 0.2 dB of coding gain improvement on average while reducing the number of iterations required for decoding by up to about 21%.
Index Terms—Normalization of output information, Soft output Viterbi algorithm
(SOVA), Sign difference ratio (SDR), Coding gain improvement.
* Corresponding author. Tel/fax: +886 02 29061780.
1. Introduction
In recent years, considerable interest has been devoted to turbo codes [1], which achieve
near-Shannon-limit performance. At the receiver, decoding can be done in an iterative way
using either the maximum a-posteriori (MAP) algorithm or the soft-output Viterbi algorithm
(SOVA). The SOVA is less complex, but suffers a degradation of about 0.7 dB compared to the
MAP. It has been found that the output information produced by a SOVA decoder does not
correctly predict the a-posteriori probability (APP) of the hard decision for bad channels. In
fact, the output information is too optimistic, and thus a correction of the output information
is necessary. To compensate for this, it has been suggested [2], [3] to multiply the extrinsic
information at the output of a SOVA decoder by a set of constant normalization factors.
Pyndiah et al. fixed the evolution of the normalization factor with the iteration number to the
following values (referred to as Type 1):
$$\left[\, C_{Type1}^{(1)};\ C_{Type1}^{(2)};\ \cdots;\ C_{Type1}^{(i)};\ \cdots \,\right] = \left[\, 0.2;\ 0.4;\ 0.6;\ 0.8;\ 1.0;\ 1.0;\ 1.0;\ \cdots \,\right] \qquad (1)$$
where $i$ is the index of the iteration number. Z. Wang and K. K. Parhi [4] also indicated that the
number of matching bits between the signs of the a-priori information and the extrinsic
information for all bits within a decoding block is highly related to the computed factor for
normalizing the extrinsic information. Therefore, a simple mapping function (referred to as
Type 2) was used to compute the target normalization factor

$$C_{Type2}^{(i)} = \begin{cases} C_0 & \text{if } M_b^{(i)} \le M_b^{(TH)} \\ C_0 + \Delta C \cdot \left[ M_b^{(i)} - M_b^{(TH)} \right] & \text{if } M_b^{(i)} > M_b^{(TH)} \end{cases} \qquad (2)$$

where $C_0$ is the base value, $\Delta C$ is the increment, $M_b^{(i)}$ is the number of matching bits
within a data block in the $i$-th iteration for decoding, and $M_b^{(TH)}$ is a pre-determined
matching threshold. In addition, L. Papke and P. Robertson [5] used a Gaussian distribution to
approximate the probability density function of the SOVA output $v$ and computed its conditional
log-likelihood ratio (LLR). They concluded that the SOVA output $v$ has to be multiplied by
the factor (referred to as Type 3)

$$C_{Type3}^{(i)} = \frac{2\, m_v^{(i)}}{\left[ \sigma_v^{(i)} \right]^2}. \qquad (3)$$

where $m_v^{(i)}$ and $\sigma_v^{(i)}$ are the expectation and standard deviation of the output $v$ in the $i$-th
iteration, respectively.
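To make the three baseline schemes concrete, the factor computations in (1)-(3) can be sketched in Python as follows. This is a minimal illustration only, not the cited authors' code; all function and variable names are our own.

```python
# Type 1 (Pyndiah et al. [2]): a fixed schedule of factors indexed by iteration,
# saturating at 1.0 after the first few iterations, as in eq. (1).
TYPE1_SCHEDULE = [0.2, 0.4, 0.6, 0.8, 1.0, 1.0, 1.0]

def c_type1(i):
    """Return the Type 1 factor for (1-based) iteration i."""
    return TYPE1_SCHEDULE[min(i - 1, len(TYPE1_SCHEDULE) - 1)]

def c_type2(m_b, m_th, c0, delta_c):
    """Type 2 (Wang & Parhi [4]): map the matching-bit count m_b to a factor, eq. (2)."""
    if m_b <= m_th:
        return c0
    return c0 + delta_c * (m_b - m_th)

def c_type3(v_block):
    """Type 3 (Papke & Robertson [5]): 2 * mean / variance of the SOVA outputs v, eq. (3)."""
    n = len(v_block)
    mean = sum(v_block) / n
    var = sum((v - mean) ** 2 for v in v_block) / n
    return 2.0 * mean / var
```

For example, with the Type 2 parameters used later in Section 3 for N = 1000 ($M_b^{(TH)} = 980$, $C_0 = 0.8$, $\Delta C = 0.01$), a block with 990 matching bits yields a factor of 0.9.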
In this letter, we shall deal with the normalization problem of output information for a
SOVA decoder and will present a new normalization technique. This letter is structured as
follows. Section 2 describes the formulation and block diagram of the proposed new
normalization technique. Simulation results are illustrated and discussed in Section 3. Finally,
conclusions are made in Section 4.
2. The SDR-Based Normalization Technique
For simplicity, we consider a turbo code that consists of two identical Recursive
Systematic Convolutional (RSC) codes with feedback. Let $\{u_k, 1 \le k \le N\}$ be a data block of
length $N$ to be transmitted. At the decoder module, two soft-input soft-output (SISO) SOVA
decoders are employed to produce the estimates $\{\hat{u}_k, 1 \le k \le N\}$. Let $y_k^s$, $y_{1,k}^p$ and $y_{2,k}^p$ be
the received systematic signal and parity signals corresponding to the transmitted bits $u_k$,
respectively, and let $L_c$ be the channel reliability. Then, in the $i$-th iteration, the first SOVA
decoder receives the channel sequence $(L_c \cdot y_k^s,\ L_c \cdot y_{1,k}^p)$ from the first encoder and the
a-priori information $\Lambda_{a,1}^{(i)}(\hat{u}_k)$ provided by de-interleaving the extrinsic information
$\Lambda_{e,2}^{(i-1)}(\hat{u}_k)$ of the second SOVA decoder in the $(i-1)$-th iteration, and hence it can produce an
improved a-posteriori information $\Lambda_1^{(i)}(\hat{u}_k)$. Next, the second SOVA decoder comes into
operation. It uses the interleaved channel sequence $(L_c \cdot \tilde{y}_k^s,\ L_c \cdot y_{2,k}^p)$ from the second
encoder and the a-priori information $\Lambda_{a,2}^{(i)}(\hat{u}_k)$ derived by interleaving the extrinsic
information $\Lambda_{e,1}^{(i)}(\hat{u}_k)$ of the first SOVA decoder to calculate the a-posteriori information
$\Lambda_2^{(i)}(\hat{u}_k)$. The above iterative process continues, and on average the BER of the decoded bits
decreases as the number of decoding iterations increases. It is shown in [6] that

$$\Lambda_1^{(i)}(\hat{u}_k) = L_c \cdot y_k^s + \Lambda_{a,1}^{(i)}(\hat{u}_k) + \Lambda_{e,1}^{(i)}(\hat{u}_k) \qquad (4)$$

$$\Lambda_2^{(i)}(\hat{u}_k) = L_c \cdot \tilde{y}_k^s + \Lambda_{a,2}^{(i)}(\hat{u}_k) + \Lambda_{e,2}^{(i)}(\hat{u}_k). \qquad (5)$$

The iterative process is implemented by setting

$$\Lambda_{a,1}^{(i)}(\hat{u}_k) = DeInter\left[ \Lambda_{e,2}^{(i-1)}(\hat{u}_k) \right] \qquad (6)$$

$$\Lambda_{a,2}^{(i)}(\hat{u}_k) = Inter\left[ \Lambda_{e,1}^{(i)}(\hat{u}_k) \right] \qquad (7)$$

where $Inter[\cdot]$ and $DeInter[\cdot]$ denote the interleaving and de-interleaving operations,
respectively.
As described earlier in Section 1, the output information of a SOVA decoder needs to be
normalized in order to obtain a more accurate LLR. Based on this fact, the new normalization
technique (referred to as Type 4) extends the existing sign difference ratio (SDR) technique [7]
to compensate the associated soft outputs of a SOVA decoder. The block diagram of an
iterative SOVA decoder employing the SDR-based normalization technique is shown in
Figure 1. First, we compute the values of the SDR function (SDRF) of the $k$-th estimated bit $\hat{u}_k$
in the $i$-th iteration for SOVA decoders 1 and 2:

$$SDRF_1^{(i)}(\hat{u}_k) = \begin{cases} \dfrac{1}{N} & \text{if } \bar{\Lambda}_{a,1}^{(i)}(\hat{u}_k) \cdot \bar{\Lambda}_{e,1}^{(i)}(\hat{u}_k) < 0 \\ 0 & \text{elsewhere} \end{cases} \qquad (8)$$

and

$$SDRF_2^{(i)}(\hat{u}_k) = \begin{cases} \dfrac{1}{N} & \text{if } \bar{\Lambda}_{a,2}^{(i)}(\hat{u}_k) \cdot \bar{\Lambda}_{e,2}^{(i)}(\hat{u}_k) < 0 \\ 0 & \text{elsewhere} \end{cases} \qquad (9)$$

where $N$ is the block length, and $\bar{\Lambda}_{a,1}^{(i)}(\hat{u}_k)$, $\bar{\Lambda}_{e,1}^{(i)}(\hat{u}_k)$ and $\bar{\Lambda}_{a,2}^{(i)}(\hat{u}_k)$, $\bar{\Lambda}_{e,2}^{(i)}(\hat{u}_k)$ represent the
normalized versions of $\Lambda_{a,1}^{(i)}(\hat{u}_k)$, $\Lambda_{e,1}^{(i)}(\hat{u}_k)$ and $\Lambda_{a,2}^{(i)}(\hat{u}_k)$, $\Lambda_{e,2}^{(i)}(\hat{u}_k)$, respectively. Then,
the normalization factors $C_{1,Type4}^{(i)}$ and $C_{2,Type4}^{(i)}$ associated with the data block decoded by
SOVA decoders 1 and 2 in the $i$-th iteration can be formulated as

$$C_{1,Type4}^{(i)} = 1.0 - \sum_{k=1}^{N} SDRF_2^{(i)}(\hat{u}_k) \qquad (10)$$

and

$$C_{2,Type4}^{(i)} = 1.0 - \sum_{k=1}^{N} SDRF_1^{(i)}(\hat{u}_k). \qquad (11)$$
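Computationally, equations (8)-(11) amount to counting sign disagreements between the normalized a-priori and extrinsic values and subtracting the resulting fraction from 1.0. A minimal sketch (illustrative names; the LLR sequences are assumed to be given as plain Python lists):

```python
def sdr_factor(apriori, extrinsic):
    """Fold eqs. (8)/(9) into (10)/(11): 1.0 - sum_k SDRF(u_k).

    apriori[k] * extrinsic[k] < 0 marks a sign difference; each such bit
    contributes 1/N, so the factor is 1 - (#sign differences)/N.
    """
    n = len(apriori)
    sign_diffs = sum(1 for a, e in zip(apriori, extrinsic) if a * e < 0)
    return 1.0 - sign_diffs / n

# Per eqs. (10)-(11), the factor for decoder 1 uses decoder 2's SDRF and vice versa:
#   c1 = sdr_factor(apriori_2, extrinsic_2)
#   c2 = sdr_factor(apriori_1, extrinsic_1)
```

Note the cross-coupling: a block that decoder 2 finds "bad" (many sign differences) lowers the factor applied to the a-priori input of decoder 1, and vice versa.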
Based on the observations made from repeated simulations, Yufei Wu et al. [7] speculated that
the terms $\sum_{k=1}^{N} SDRF_1^{(i)}(\hat{u}_k)$ and $\sum_{k=1}^{N} SDRF_2^{(i)}(\hat{u}_k)$ tend toward zero for a "good" (easy to
decode) data block, whereas they stay high for a "bad" (hard to decode) data block as the decoding
proceeds. Apparently, the normalization factors we employ for compensating the output
information of the SOVA decoders are highly related to the underlying channel conditions and
will be updated from iteration to iteration. To incorporate the normalization factors $C_{1,Type4}^{(i)}$
and $C_{2,Type4}^{(i)}$ into the architecture of a SOVA decoder, the iterative process described in
equations (6) and (7) should be rewritten as

$$\bar{\Lambda}_{a,1}^{(i)}(\hat{u}_k) = C_{1,Type4}^{(i-1)} \cdot DeInter\left[ \bar{\Lambda}_{e,2}^{(i-1)}(\hat{u}_k) \right] \qquad (12)$$

$$\bar{\Lambda}_{a,2}^{(i)}(\hat{u}_k) = C_{2,Type4}^{(i)} \cdot Inter\left[ \bar{\Lambda}_{e,1}^{(i)}(\hat{u}_k) \right] \qquad (13)$$

with the initial condition $C_{1,Type4}^{(0)} = 0.5$. This initial value is determined by the experimental
results made from repeated simulations. In addition, the soft channel inputs $L_c \cdot y_k^s$, $L_c \cdot y_{1,k}^p$
for decoder 1 and $L_c \cdot \tilde{y}_k^s$, $L_c \cdot y_{2,k}^p$ for decoder 2 also need to be normalized. Thus,
equations (4) and (5) used to compute the a-posteriori information $\Lambda_1^{(i)}(\hat{u}_k)$ and $\Lambda_2^{(i)}(\hat{u}_k)$
should be modified as

$$\Lambda_1^{(i)}(\hat{u}_k) = \left( \frac{3.0 - C_{1,Type4}^{(i-1)}}{2} \right) \cdot L_c \cdot y_k^s + \bar{\Lambda}_{a,1}^{(i)}(\hat{u}_k) + \bar{\Lambda}_{e,1}^{(i)}(\hat{u}_k) \qquad (14)$$

$$\Lambda_2^{(i)}(\hat{u}_k) = \left( \frac{3.0 - C_{2,Type4}^{(i)}}{2} \right) \cdot L_c \cdot \tilde{y}_k^s + \bar{\Lambda}_{a,2}^{(i)}(\hat{u}_k) + \bar{\Lambda}_{e,2}^{(i)}(\hat{u}_k). \qquad (15)$$
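Putting (12)-(15) together, one decoding iteration of the modified architecture can be outlined as below. This is only a schematic sketch: `sova_decode`, `inter` and `deinter` are hypothetical stand-ins for an actual SOVA decoder and the interleaver/de-interleaver pair, and are not part of the letter.

```python
def sdr_iteration(c1_prev, lam_e2_prev, ch1, ch2, sova_decode, inter, deinter, n):
    """One iteration of the SDR-normalized turbo decoder, following eqs. (12)-(15)."""
    # Eq. (12): scale the de-interleaved extrinsic information of decoder 2
    # by the factor computed in the previous iteration.
    lam_a1 = [c1_prev * x for x in deinter(lam_e2_prev)]
    # Eq. (14): decoder 1 runs with its systematic channel input scaled by (3 - C)/2.
    lam_e1 = sova_decode(scale=(3.0 - c1_prev) / 2.0, channel=ch1, apriori=lam_a1)
    # Eq. (11): decoder 2's factor comes from decoder 1's sign differences.
    c2 = 1.0 - sum(1 for a, e in zip(lam_a1, lam_e1) if a * e < 0) / n
    # Eq. (13): scale the interleaved extrinsic information of decoder 1.
    lam_a2 = [c2 * x for x in inter(lam_e1)]
    # Eq. (15): decoder 2 runs with its own channel-input scaling.
    lam_e2 = sova_decode(scale=(3.0 - c2) / 2.0, channel=ch2, apriori=lam_a2)
    # Eq. (10): factor carried over to decoder 1 in the next iteration.
    c1 = 1.0 - sum(1 for a, e in zip(lam_a2, lam_e2) if a * e < 0) / n
    return c1, lam_e2
```

The loop starts with $C_{1,Type4}^{(0)} = 0.5$, the initial condition stated above, and the factor update costs only the N sign comparisons described in Section 3.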
3. Simulation Results and Discussions
In this section, we conduct a series of experiments to evaluate the effectiveness of the
proposed SDR-based normalization technique for SOVA decoding. In the simulations, data
blocks of 1000 bits (N = 1000) and 2000 bits (N = 2000) are both considered, and 50,000
data blocks are transmitted. Two encoding schemes are under investigation. One is
"two 8-state (memory size M = 3) RSC constituent encoders with generator polynomials
(13, 15)" and the other is "two 4-state (memory size M = 2) RSC constituent encoders with
generator polynomials (7, 5)". The encoders are linked together by a pseudorandom interleaver.
The overall code rate is 1/2. The coded bits are modulated using binary phase shift keying
(BPSK), and white Gaussian noise with a double-sided power spectral density of $N_0/2$ is
added to the modulated signal. At the decoder, only eight iterations are carried out, as no
significant improvement in performance is obtained with a higher number of iterations. In
addition, the parameters used for "Type 2" normalization are
(N = 1000: $M_b^{(TH)} = 980$, $C_0 = 0.8$, $\Delta C = 0.01$) and
(N = 2000: $M_b^{(TH)} = 1960$, $C_0 = 0.8$, $\Delta C = 0.005$).
The results of the simulations are shown in Figures 2~4 and Figures 5~7. Figures 2
and 3 show the BER and FER plotted versus $E_b/N_0$ for a SOVA decoder employing various
normalization techniques with the encoding parameters M = 3, N = 1000. The performance of
the case without normalization is also plotted for reference. As can be seen, in the low
$E_b/N_0$ region ($E_b/N_0 < 2.4$ dB) the BER and FER of a SOVA decoder can be improved by
means of the various normalization techniques we discussed. Above all, the proposed SDR-based
approach performs better than the other normalization techniques. When the channel is
less distorted ($E_b/N_0 > 2.4$ dB), there is no difference in performance between the cases
with and without normalization. Figure 4 shows the average number of iterations plotted
versus $E_b/N_0$ for the same cases illustrated in Figures 2 and 3. In this figure, the decoding
is terminated when the number of sign changes between the a-priori information and the
extrinsic information within a data block is less than $0.01 \times N$. From this figure we can
observe that, for most of the $E_b/N_0$ region we simulated, the presented technique requires
fewer iterations than the other techniques. Also, the new technique uses only N
binary additions of sign bits and a counter no longer than N to compute the target
normalization factor. Obviously, the complexity of our approach for normalizing the output
information of a SOVA decoder is significantly less than the complexity of extra decoding
iterations. Figures 5 and 6 show the BER and FER plotted versus $E_b/N_0$ for a SOVA
decoder employing various normalization techniques in which M = 2 and N = 2000. Figure 7
shows the average number of iterations plotted versus $E_b/N_0$ for the different
normalization schemes with M = 2 and N = 2000. By comparing Figures 2~4 (for the case
M = 3, N = 1000) and Figures 5~7 (for the case M = 2, N = 2000), we
can observe that all the normalization techniques we evaluated exhibit similar relative
performance even with different encoding schemes and block sizes. Apparently, the proposed
SDR-based normalization technique is effective and easy to implement under various
conditions.
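The early-stopping rule used here (terminate once the number of a-priori/extrinsic sign changes within a block falls below $0.01 \times N$) can be sketched as follows; this is an illustration of the criterion, not the authors' implementation.

```python
def should_stop(apriori, extrinsic, threshold_ratio=0.01):
    """Stop decoding when the number of sign changes between the a-priori
    and extrinsic information within the data block drops below 0.01 * N."""
    n = len(apriori)
    sign_changes = sum(1 for a, e in zip(apriori, extrinsic) if a * e < 0)
    return sign_changes < threshold_ratio * n
```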
4. Conclusion
In this letter, we proposed a new normalization technique and compared it with some
widely used normalization techniques. The idea of our approach is first to check the sign
consistency between the a-priori information and the extrinsic information, and then compute
the corresponding sign difference ratio so as to normalize the output information of a SOVA
decoder. It has been shown by simulations that the SDR-based normalization technique
achieves better performance in terms of BER, FER and the average number of iterations than
the other normalization techniques we discussed.
Acknowledgment
This research has been partially sponsored by the National Science Council, Taiwan, ROC,
under contract number NSC-95-2221-E-131-015-. The authors would also like to thank
Dr. Erl-Huei Lu for his very helpful suggestions and comments.
Authors
Yi-Nan Lin received his B.S. degree from the Electrical Engineering Department of National Taiwan Institute
of Technology in 1989, and the M.S. degree in Computer Science & Engineering from the Yuan Ze University in
2000. He joined the Department of Electrical Engineering at Mingchi University of Technology, Taishan, Taiwan,
in 1990. He is now a lecturer in the Department of Electronic Engineering. He is also a Ph.D. candidate in the
Electrical Engineering Department of Chang Gung University, Taoyuan, Taiwan. His current research interests
include error-control coding, and digital transmission systems.
Wei-Wen Hung received his B.S. degree from the Electrical Engineering Department of Tatung Institute of
Technology in 1986, and the M.S. and Ph.D. degrees in electrical engineering from the National Tsinghua
University in 1988 and 2000, respectively. He joined the Department of Electrical Engineering at Mingchi
University of Technology, Taishan, Taiwan, in 1990. He was the Vice Dean of Student Affairs from 2000 to 2002.
He was also the Chairman of Department of Electronic Engineering in 2003. He is now a professor in the
Department of Electronic Engineering. His current research interests include speech signal processing, wireless
communication and embedded system design.
References
[1] C. Berrou and A. Glavieux, "Near-optimum error-correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, Oct. 1996.
[2] R. M. Pyndiah, "Near-optimum decoding of product codes: block turbo codes," IEEE Trans. Commun., vol. 46, no. 8, pp. 1003-1010, Aug. 1998.
[3] D. W. Kim, T. W. Kwon, J. R. Choi and J. J. Kong, "A modified two-step SOVA-based turbo decoder with a fixed scaling factor," in Proc. IEEE Int. Symposium on Circuits and Systems (ISCAS), pp. IV-37~IV-40, May 2000.
[4] Z. Wang and K. K. Parhi, "High performance, high throughput turbo/SOVA decoder design," IEEE Trans. Commun., vol. 51, no. 4, pp. 570-579, Apr. 2003.
[5] L. Papke and P. Robertson, "Improved decoding with the SOVA in a parallel concatenated (turbo-code) scheme," in Proc. IEEE Int. Conf. Communication, pp. 102-106, 1996.
[6] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, pp. 429-445, Mar. 1996.
[7] Y. Wu, B. D. Woerner, and W. J. Ebel, "A simple stopping criterion for turbo decoding," IEEE Commun. Letters, vol. 4, no. 8, pp. 258-260, Aug. 2000.
Figure Captions
Fig. 1. Block diagram of an iterative SOVA decoder employing the SDR-based normalization technique.
Fig. 2. BER versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 3, N = 1000).
Fig. 3. FER versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 3, N = 1000).
Fig. 4. Average number of iterations versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 3, N = 1000).
Fig. 5. BER versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 2, N = 2000).
Fig. 6. FER versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 2, N = 2000).
Fig. 7. Average number of iterations versus $E_b/N_0$ for a SOVA decoder employing various normalization techniques (encoding parameters M = 2, N = 2000).
Fig. 1
[Figure: block diagram of the iterative SOVA decoder with SDR-based normalization. Recoverable labels show the channel inputs $L_c \cdot y_k^s$, $L_c \cdot y_{1,k}^p$, $L_c \cdot \tilde{y}_k^s$, $L_c \cdot y_{2,k}^p$; the exchanged information $\Lambda_{a,1}^{(i)}(\hat{u}_k)$, $\Lambda_{e,1}^{(i)}(\hat{u}_k)$, $\Lambda_1^{(i)}(\hat{u}_k)$, $\Lambda_{a,2}^{(i)}(\hat{u}_k)$, $\Lambda_{e,2}^{(i)}(\hat{u}_k)$, $\Lambda_2^{(i)}(\hat{u}_k)$ and the decisions $\hat{u}_k$; the SDRF accumulators $1 - \sum_{k=1}^{N} SDRF_1^{(i)}(\hat{u}_k)$ and $1 - \sum_{k=1}^{N} SDRF_2^{(i)}(\hat{u}_k)$; the factors $C_{1,Type4}^{(i-1)}$ and $C_{2,Type4}^{(i)}$; and the channel-input scalings $(3.0 - C_{1,Type4}^{(i-1)})/2$ and $(3.0 - C_{2,Type4}^{(i)})/2$.]
Fig. 2