Low-Density Parity-Check (LDPC) codes have become prominent in communication systems for error correction, owing to their robust error-correcting performance and their ability to meet the requirements of the 5G system. However, the main challenge facing researchers is the hardware implementation, because of its high complexity and long run-time. In this paper, an efficient and optimized log-domain decoder has been implemented using Xilinx System Generator with a Kintex-7 FPGA device (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a Bit Error Rate (BER) very close to theoretical calculations, which shows that this decoder is suitable for next-generation demands that require a high data rate with very low BER.
Chaos Encryption and Coding for Image Transmission over Noisy Channels (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Convolutional encoding with Viterbi decoding is a good forward error correction technique suitable for channels affected by noise degradation. Fangled Viterbi decoders are variants of the Viterbi decoder (VD) that decode faster and use less memory, but have no error detection capability. The modified fangled decoder takes this a step further by gaining one-bit error correction and detection capability, at the cost of doubling the computational complexity and processing time. A new efficient fangled Viterbi algorithm with lower complexity and processing time, along with two-bit error correction capability, is proposed in this paper. For one-bit error correction on 14-bit input data, compared with the modified fangled Viterbi decoder, computational complexity came down by 36-43% and processing delay was halved. For two-bit error correction, compared with the modified fangled decoder, computational complexity decreased by 22-36%.
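Several abstracts in this list build on Viterbi decoding, so a minimal baseline sketch may help. The following is a plain hard-decision Viterbi decoder for a textbook rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal); it is our own illustration of the standard algorithm, not the fangled variant described in the paper, whose details are not given here.

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder with g0=111, g1=101 (octal 7, 5)."""
    out, state = [], 0                      # state = two previous input bits
    for b in bits:
        full = (b << 2) | state             # [current bit, 2-bit memory]
        out += [bin(full & 0b111).count('1') & 1,   # output of g0 = 111
                bin(full & 0b101).count('1') & 1]   # output of g1 = 101
        state = full >> 1                   # shift register
    return out

def viterbi_decode(rx):
    """Hard-decision maximum-likelihood decoding over the 4-state trellis."""
    n_states, INF = 4, float('inf')
    metric = [0] + [INF] * (n_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                # hypothesised input bit
                full = (b << 2) | s
                o0 = bin(full & 0b111).count('1') & 1
                o1 = bin(full & 0b101).count('1') & 1
                ns = full >> 1
                m = metric[s] + (o0 != rx[i]) + (o1 != rx[i + 1])
                if m < new_metric[ns]:      # add-compare-select step
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

With a zero-terminated message, this baseline corrects a single channel bit error, since the code's free distance is 5.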
A NOVEL APPROACH FOR LOWER POWER DESIGN IN TURBO CODING SYSTEM (VLSICS Design)
Low power is an extremely important issue for future mobile communication systems; the focus of this paper is the implementation of turbo codes for low-power solutions. The effect on performance of variations in parameters such as frame length, number of iterations, type of encoding scheme and type of interleaver, in the presence of additive white Gaussian noise, is studied with a floating-point model. In order to capture the effect of quantization and word-length variation, a fixed-point model of the application is also developed. The application performance measure, namely bit error rate (BER), is used as a design constraint while optimizing for power and area coverage. Low-power optimization is performed at the implementation level by the use of voltage scaling. With these techniques, power is reduced by 98.5% and LUT area by 57%, while the speed grade is increased. This type of power manager is proposed and implemented based on the timing details of the turbo decoder in the VHDL model.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Implementation of Viterbi Decoder on FPGA to Improve Design (ijsrd.com)
Data transmissions over wireless channels are affected by attenuation, distortion, interference and noise, which impair the receiver's ability to recover the transmitted information correctly. Convolutional coding with Viterbi decoding is an FEC technique particularly suited to channels in which the transmitted signal is corrupted mainly by additive white Gaussian noise (AWGN). Convolutional codes are used for error correction; they have rather good correcting capability and perform well even on very bad channels with high error probabilities. Viterbi decoding is the best technique for decoding convolutional codes, but it is limited to small constraint lengths. The Viterbi algorithm is a well-known maximum-likelihood algorithm for decoding convolutional codes.
Study of the operational SNR while constructing polar codes (IJECEIAES)
Channel coding protects information to be communicated across an unreliable medium by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to correct, or at least detect, bit errors in digital communication systems. In this paper we study an original FECC known as polar coding, which has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates, while varying the blocklength. We find in our extensive simulations that the BER becomes more sensitive to the operational SNR (OSNR) as we increase the blocklength and code rate. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate shifts the OSNR domain. This trade-off must be taken into consideration when designing polar codes for high-throughput applications.
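The encoding side of polar coding mentioned above reduces to the linear transform x = u · F^(⊗n) over GF(2), with the Arikan kernel F = [[1,0],[1,1]]. The sketch below is our own illustration of that transform (ignoring frozen-bit selection and the decoder), not the paper's simulator:

```python
def polar_transform(u):
    """Apply the N x N polar transform x = u * F^(kron n) over GF(2).

    len(u) must be a power of 2. Recursion: combine each element with its
    partner a half-block away (u_i XOR u_{i+N/2}), then transform the halves.
    """
    n = len(u)
    if n == 1:
        return u[:]
    top = polar_transform([u[i] ^ u[i + n // 2] for i in range(n // 2)])
    bot = polar_transform(u[n // 2:])
    return top + bot
```

A convenient property for checking the sketch: over GF(2) the transform is its own inverse, so applying it twice returns the input.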
Survey on Error Control Coding Techniques (IJTET Journal)
Abstract - Error control coding techniques are used to ensure that the information received is correct and has not been corrupted by environmental defects and noise occurring during transmission or during a data read operation from memory. Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission. Error control coding is divided into automatic repeat request (ARQ) and forward error correction (FEC). In ARQ, when the receiver detects an error, it requests the sender to retransmit the data. FEC, by contrast, adds redundant data to a message so that it can be recovered by the receiver even when a number of errors were introduced, either during transmission or in storage. Detection and correction of burst errors can be obtained with Reed-Solomon codes, while Low-Density Parity-Check codes furnish outstanding performance remarkably close to the Shannon limit.
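The FEC idea summarized above can be made concrete with the simplest possible code: a rate-1/3 repetition code, which corrects any single bit error per 3-bit block by majority vote. This is a toy illustration of the principle, not one of the codes surveyed:

```python
def rep3_encode(bits):
    """Repeat every bit three times (rate-1/3 repetition code)."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(rx):
    """Majority vote over each 3-bit block; corrects one error per block."""
    return [int(sum(rx[i:i + 3]) >= 2) for i in range(0, len(rx), 3)]
```

The receiver never asks for a retransmission, which is exactly the contrast with ARQ drawn in the survey.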
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb... (TELKOMNIKA JOURNAL)
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm is proposed for Block Turbo Codes (BTCs). The algorithm ingeniously adapts the RLL based decoding algorithm, which is a soft-input hard-output algorithm, for the constituent block codes. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft input to the iterative turbo decoding process. RLL based decoding of the constituent codes estimates the optimal transmitted codeword through a directed minimal search. The proposed RLL based constituent decoder replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear performance advantage over the conventional Chase-2 based SISO decoding scheme, with reduced decoding latency at lower noise levels.
A 2-stage data word packet communication decoder using rate 1-by-3 Viterbi de... (eSAT Journals)
Abstract
In the field of consumer electronics, high-speed communication technology applications based on hardware and software control play a vital role in establishing benchmarks for the operational requirements of electronic hardware in wired and wireless communication. In the modern era of communication electronics, decoding and encoding of data using the high-speed, low-power features of VLSI-based FPGA devices [1] offers small area, hardware portability, data security, high-speed network connectivity [2], data error removal capability, complex algorithm realization, etc. The Viterbi decoder is a high-rate decoder that is very commonly and effectively used in modern communication hardware; it involves a trellis-coded modulation (TCM) scheme for decoding the data, and it attempts to reduce power, delay [1] and cost compared to conventional decoders for wired and wireless communication. The work in this paper proposes a Viterbi decoder design with improved error-identification probability for communication systems and low-power operation. The proposed design combines the error-identification capability of the Viterbi decoder with a parity decoder to improve the overall system's probability of identifying errors during the communication process. Among the various functional blocks in the Viterbi decoder, both hardware complexity and decoding speed depend highly on the architecture of the decoder. The operational blocks of the Viterbi decoder are combined with a parity-testing block that uses a parity bit to identify errors in the Viterbi-decoded data. The present design proposes a multi-stage pipelined architecture: the former stage is the Viterbi decoding stage and the latter stage is the parity decoding stage, which identifies errors in the communicated data.
Any odd number of errors occurring in the data recovered from the former decoding stage can be identified by the latter decoding stage. A general solution for deriving the communication using a conventional Viterbi decoder is also given in this paper. Implementation results of the proposed design for a rate-1/3 convolutional code are compared with the conventional design. The proposed algorithm is simulated and synthesized successfully using the Xilinx ISE tool [3] on a Xilinx Spartan-3E FPGA.
Keywords: ACS (Add-Compare-Select), Convolutional Code Rate, Error probability, FPGA, Low Power, Parity Encoder, Pipelining, Trace Back, Viterbi Decoder, Xilinx ISE.
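The parity stage described in the abstract above can be sketched in a few lines: a single even-parity bit appended at the encoder lets the second decoding stage flag any odd number of bit errors in the recovered word, though it can neither locate them nor detect an even number of errors. This is a hypothetical minimal model of that stage, not the authors' RTL:

```python
def add_parity(word):
    """Append an even-parity bit so the total number of 1s is even."""
    return word + [sum(word) % 2]

def parity_ok(word_with_parity):
    """True if the even-parity check passes (no odd-count error detected)."""
    return sum(word_with_parity) % 2 == 0
```

The last assertion below shows the known blind spot: two errors cancel in the parity sum and go undetected, which is why the abstract claims detection only for odd error counts.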
Investigating the Performance of NoC Using Hierarchical Routing Approach (IJERA Editor)
The Network-on-Chip (NoC) model has emerged as a revolutionary methodology for integrating a large number of intellectual property (IP) blocks in a die. According to the International Technology Roadmap for Semiconductors (ITRS), device sizes must be scaled down, and long interconnections should be avoided; for this, new interconnect patterns are needed. Three-dimensional ICs are capable of achieving superior performance, resistance against noise and lower interconnect power consumption compared to traditional planar ICs. In this paper, network data are routed by a hierarchical methodology. We analyze the total number of logic gates and registers, power consumption and delay when different widths of data are transmitted, using the Quartus II software.
Lightweight Hamming product code based multiple bit error correction coding s... (journalBEEI)
In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ, using shared resources, for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption respectively, with only a small increase in decoder delay, compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and savings of up to 58% in total power consumption compared to the other error control schemes. The low complexity and excellent residual flit error rate make the proposed code suitable for on-chip interconnection links.
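The constituent code behind the scheme above can be illustrated with a plain (7,4) Hamming encoder and syndrome decoder, which corrects any single bit error; the paper's extended Hamming product code arranges such codewords into rows and columns with additional parity. The bit layout below is the classic one with parity bits at positions 1, 2 and 4, shown for illustration only:

```python
def hamming74_encode(d):
    """(7,4) Hamming encoder; codeword positions 1..7 are [p1,p2,d1,p4,d2,d3,d4]."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # covers positions 3,5,7
    p1 = d0 ^ d2 ^ d3          # covers positions 3,6,7
    p2 = d1 ^ d2 ^ d3          # covers positions 5,6,7
    return [p0, p1, d0, p2, d1, d2, d3]

def hamming74_correct(c):
    """Syndrome decoding: the syndrome value is the 1-based error position."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # check over positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # check over positions 4,5,6,7
    s = s1 + 2 * s2 + 4 * s4         # 0 means no error detected
    if s:
        c[s - 1] ^= 1                # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]  # extract the data bits
```

In a product code, a word failing its row decode can additionally be corrected by the column codes, which is what buys multiple-bit correction.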
Tail-biting convolutional codes (TBCC) have been extensively applied in communication systems. The method is implemented by replacing the fixed tail with tail-biting data, which avoids the overhead of fixed tail bits; unfortunately, it makes the decoding computation more complex. Hence, several algorithms have been developed to overcome this issue, most of them implemented iteratively with an uncertain number of iterations. In this paper, we propose a VLSI architecture implementing our reversed-trellis TBCC (RT-TBCC) algorithm, designed by modifying the direct-terminating maximum-likelihood (ML) decoding process to achieve a better correction rate. The purpose is to offer an alternative solution for tail-biting convolutional code decoding with fewer computations than the existing solution. The proposed architecture has been evaluated for the LTE standard and significantly reduces computational time and resources compared to the existing direct-terminating ML decoder. For evaluation of functionality and Bit Error Rate (BER), several simulations, a System-on-Chip (SoC) implementation and FPGA synthesis are performed.
PERFORMANCE ESTIMATION OF LDPC CODE USING SUM PRODUCT ALGORITHM AND BIT FLIPP... (Journal For Research)
Low-density parity-check code is a linear block code that approaches the Shannon limit and has low decoding complexity. We have taken an LDPC (Low Density Parity Check) code with a 1/2 code rate as the error-correcting code in a digital video stream, and studied its performance with BPSK modulation over an AWGN (additive white Gaussian noise) channel using the sum-product algorithm and the bit-flipping algorithm. The plot of the bit error rate of the code against SNR is taken as the output performance parameter of the proposed methodology. BERs are considered for different numbers of frames and different numbers of iterations, and the performance of the sum-product and bit-flipping algorithms is also compared. All simulation work has been implemented in MATLAB.
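The bit-flipping algorithm studied above admits a very small sketch: compute the parity-check syndromes, then repeatedly flip the bit that participates in the most unsatisfied checks until every check passes. The (8,4) parity-check matrix H below is an assumed toy example, not the matrix from the paper's MATLAB simulation:

```python
H = [  # toy (8,4) parity-check matrix, systematic form [P | I]
    [1, 1, 0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 0, 1],
]

def bit_flip_decode(y, H, max_iter=20):
    """Hard-decision bit flipping: flip the bit in the most failed checks."""
    y = y[:]
    for _ in range(max_iter):
        # syndrome of each check row (1 = unsatisfied)
        syndromes = [sum(h[j] & y[j] for j in range(len(y))) % 2 for h in H]
        if not any(syndromes):
            return y                         # valid codeword reached
        # vote for each bit: how many unsatisfied checks touch it
        votes = [sum(s for h, s in zip(H, syndromes) if h[j])
                 for j in range(len(y))]
        y[max(range(len(y)), key=lambda j: votes[j])] ^= 1
    return y                                 # give up after max_iter rounds
```

The sum-product algorithm compared in the paper replaces these hard votes with probabilistic messages, which is why it performs better at the cost of complexity.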
Performance Analysis of Steepest Descent Decoding Algorithm for LDPC Codes (idescitation)
Among various hard-decision based Bit Flipping (BF) algorithms for decoding Low-Density Parity-Check (LDPC) codes, such as Weighted Bit Flipping (WBF) and Improved Reliability Ratio Weighted Bit Flipping (IRRWBF), the Steepest Descent Bit Flipping (SDBF) algorithm achieves better error performance. In this paper, the performance of the steepest descent algorithm in both single steepest descent and multi steepest descent modes is analysed. The performance of the IEEE 802.16e standard is also analysed using the SDBF decoding algorithm. SDBF requires fewer check node and variable node operations compared to the Sum Product Algorithm (SPA) and Min Sum Algorithm (MSA), and achieves a coding gain of 0.1 ~ 0.2 dB compared to Single-SDBF without requiring complex log and exponential operations.
International Journal of Engineering Research and Development (IJERD)
FPGA Implementation of LDPC Encoder for Terrestrial Television (AI Publications)
The increasing data rates in digital television networks raise the demands on the data capacity of current transmission channels. Through new standards, the capacity of existing channels is increased with new methods of error correction coding and modulation. In this work, Low Density Parity Check (LDPC) codes are implemented for their error-correcting capability. LDPC is a linear error-correcting code used for transmitting a message over a noisy transmission channel, and LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over noisy channels. These codes are capable of performing near the Shannon limit and have low decoding complexity. LDPC uses a parity-check matrix for its encoding and decoding; the main advantage of the parity-check matrix is that it helps in detecting and correcting errors, which is very important against noisy channels. This work presents the design and implementation of an LDPC encoder for digital terrestrial television transmission according to the Chinese DTMB standard. The system is written in Verilog and implemented on an FPGA, and the whole work is then verified with the help of MATLAB modelling.
IOSR journal of VLSI and Signal Processing (IOSRJVSP) is an open access journal that publishes articles which contribute new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and establishing new collaborations in these areas.
Performances Concatenated LDPC based STBC-OFDM System and MRC Receivers (IJECEIAES)
This paper presents the bit error rate performance of low-density parity-check (LDPC) coding concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a 3/4-rate convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, the Maximum Ratio Combining (MRC) channel equalization technique has been implemented to extract transmitted symbols without enhancing the noise power.
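BER studies like the one above are normally judged against the closed-form bit error rate of uncoded BPSK over AWGN, Pb = 0.5 * erfc(sqrt(Eb/N0)); any coding and diversity gain is read off as the gap to this curve. A short reference implementation (our addition, not part of the paper):

```python
import math

def bpsk_ber_awgn(ebn0_db):
    """Theoretical BER of uncoded BPSK over AWGN for Eb/N0 given in dB."""
    ebn0 = 10 ** (ebn0_db / 10)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

For example, at Eb/N0 = 0 dB the formula gives a BER of roughly 7.9e-2, and the curve falls steeply as Eb/N0 grows.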
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Implementation of Viterbi Decoder on FPGA to Improve Designijsrd.com
In the data transmissions over wireless channels are affect by attenuation, distortion, interference and noise, which affects the receiver's ability to receive correct information. Convolution coding with Viterbi decoding is a FEC technique that is particularly suited to a channel in which transmitted signal is corrupted mainly by additive white Gaussian noise (AWGN).Convolutional codes are used for error correction. They have rather good correcting capability and perform well even on very bad channels with error probabilities. Viterbi decoding is the best technique for decoding the Convolutional codes but it is limited to smaller constraint lengths. Viterbi algorithm is a well-known maximum-likelihood algorithm for decoding of convolutional codes.
Study of the operational SNR while constructing polar codes IJECEIAES
Channel coding is commonly based on protecting information to be communicated across an unreliable medium, by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to enable correcting or at least detecting bit errors in digital communication systems. In this paper we study an original FECC known as polar coding which has proven to meet the typical use cases of the next generation mobile standard. This work is motivated by the suitability of polar codes for the new coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates, while varying the blocklength. We find in our extensive simulations that the BER becomes more sensitive to operational SNR (OSNR) as long as we increase the blocklength and code rate. Finally, we note that increasing blocklength achieves an SNR gain, while increasing code rate changes the OSNR domain. This trade-off sorted out must be taken into consideration while designing polar codes for high-throughput application.
Survey on Error Control Coding TechniquesIJTET Journal
Abstract - Error Control Coding techniques used to ensure that the information received is correct and has not been corrupted, owing to the environmental defects and noises occurring during transmission or the data read operation from Memory. Environmental interference and physical effects defects in the communication medium can cause random bit errors during data transmission. While, data corruption means that the detection and correction of bytes by applying modern coding techniques. Error control coding divided into automatic repeat request (ARQ) and forward error correction (FEC).First of all, In ARQ, when the receiver detects an error in the receiver; it requests back the sender to retransmit the data. Second, FEC deals with system of adding redundant data in a message and also it can be recovered by a receiver even when a number of errors were introduce either during the process of data transmission, or on the storage. Therefore, error detection and correction of burst errors can be obtained by Reed-Solomon code. Moreover, the Low-Density Parity Check code furnishes outstanding performance that sparingly near to the Shannon limit.
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb...TELKOMNIKA JOURNAL
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm has been proposed for Block Turbo Codes (BTCs). The algorithm ingeniously adapts the RLL based decoding algorithm for the constituent block codes, which is a soft-input hard-output algorithm. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft-input to the iterative turbo decoding process. RLL based decoding of constituent codes estimate the optimal transmitted codeword through a directed minimal search. The proposed RLL based decoder for the constituent code replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear advantage of performance improvement over conventional Chase-2 based SISO decoding scheme with reduced decoding latency at lower noise levels.
A 2 stage data word packet communication decoder using rate 1-by-3 viterbi de...eSAT Journals
Abstract
In the field of consumer electronics the high speed communication technology applications based on hardware and software control are playing a vital role in establishing the benchmarks for catering the operational requirements of the electronic hardware to fulfil the consumer requirements in wired and wireless communication. In the modern era of communication electronics decoding and encoding of any data(s) using high speed and low power features of FPGA devices [1] based on VLSI technology offers less area, hardware portability, data security, high speed network connectivity [2], data error removal capability, complex algorithm realization, etc. Viterbi decoder is a high rate decoder that is very commonly and effectively used method in modern communication hardware. It involves Trellis coded modulation (TCM) scheme for decoding the data. The viterbi decoder is an attempt to reduce the power, speed [1], and cost as compared to normal decoders for wired and wireless communication. The work in this paper proposes a improved data error identification probability design of Viterbi decoders for communication systems with a low power operational performance. The proposed design combines the error identification capability of the viterbi decoder with parity decoder to improve the probability of the overall system in identifying the error during the communication process. Among various functional blocks in the Viterbi decoder, both hardware complexity and decoding speed highly depends on the architecture of the Decoder. The operational blocks of viterbi decoder are combined with parity testing block to identify the error in the viterbi decoded data using parity bit. The present design proposes a multi-stage pipelined architecture of decoder. The former stage is the viterbi decoding stage and the later stage is the parity decoding stage for the identification of error in the communicated data. 
Any Odd number of errors occuring in the recovered data from the former decoding stage can be identified using the later decoding stage. A general solution to derive the communication using conventional viterbi decoder is also given in this paper. Implementation result of proposed design for a rate 1/3 convolutional code is compared with the conventional design. The design of proposed algorithm is simulated and synthesized successfully Xilinx ISE Tool [3] on Xilinx Spartan 3E FPGA.
Keywords: ACS (Add-Compare-Select), Convolutional Code Rate, Error probability, FPGA, Low Power, Parity Encoder, Pipelining, Trace Back, Viterbi Decoder, Xilinx ISE.
Investigating the Performance of NoC Using Hierarchical Routing ApproachIJERA Editor
The Network-on-Chip (NoC) model has appeared as a revolutionary methodology for incorporatingmany number of intellectual property (IP) blocks in a die. As said by the International Roadmap for Semiconductors (ITRS), it is must to scale down the device size. In order to reduce the device long interconnection should be avoided. For that, new interconnect patterns are need. Three-dimensional ICs are proficient of achieving superior performance, resistance against noise and lower interconnect power consumption compared to traditional planar ICs. In this paper, network data routed by Hierarchical methodology. We are analyzing total number of logic gates and registers, power consumption and delay when different bits of data transmitted using Quartus II software.
Lightweight hamming product code based multiple bit error correction coding s...journalBEEI
In this paper, we present multiple bit error correction coding scheme based on extended Hamming product code combined with type II HARQ using shared resources for on chip interconnect. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three stages iterative decoding method for on chip interconnects. The proposed method of decoding achieves 20% and 28% reduction in area and power consumption respectively, with only small increase in decoder delay compared to the existing three stage iterative decoding scheme for multiple bit error correction. The proposed code also achieves excellent improvement in residual flit error rate and up to 58% of total power consumption compared to the other error control schemes. The low complexity and excellent residual flit error rate make the proposed code suitable for on chip interconnection links.
Tail-biting convolutional codes (TBCC) have been extensively applied in communication systems. This method is implemented by replacing the fixedtail with tail-biting data. This concept is needed to achieve an effective decoding computation. Unfortunately, it makes the decoding computation becomes more complex. Hence, several algorithms have been developed to overcome this issue in which most of them are implemented iteratively with uncertain number of iteration. In this paper, we propose a VLSI architecture to implement our proposed reversed-trellis TBCC (RT-TBCC) algorithm. This algorithm is designed by modifying direct-terminating maximumlikelihood (ML) decoding process to achieve better correction rate. The purpose is to offer an alternative solution for tail-biting convolutional code decoding process with less number of computation compared to the existing solution. The proposed architecture has been evaluated for LTE standard and it significantly reduces the computational time and resources compared to the existing direct-terminating ML decoder. For evaluations on functionality and Bit Error Rate (BER) analysis, several simulations, System-on-Chip (SoC) implementation and synthesis in FPGA are performed.
PERFORMANCE ESTIMATION OF LDPC CODE USING SUM PRODUCT ALGORITHM AND BIT FLIPP... - Journal For Research
Low density parity check (LDPC) code is a linear block code that approaches the Shannon limit and has low decoding complexity. We have taken an LDPC code with 1/2 code rate as the error correcting code in a digital video stream and studied its performance with BPSK modulation in an AWGN (Additive White Gaussian Noise) channel using the sum product algorithm and the bit flipping algorithm. The plot of bit error rate against SNR is taken as the output performance parameter of the proposed methodology. BER is considered for different numbers of frames and different numbers of iterations, and the performance of the sum product and bit flipping algorithms is compared. All simulation work has been implemented in MATLAB.
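The hard-decision bit-flipping idea referred to above can be sketched in a few lines. This is a toy Python stand-in (the study itself uses MATLAB), and the matrix below is a small Hamming check matrix rather than a large sparse LDPC one:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Each iteration, flip the single bit taking part in the most failed checks."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2            # 1 marks an unsatisfied parity check
        if not syndrome.any():
            break                       # all checks satisfied: stop
        fails = H.T @ syndrome          # failed-check count per bit
        r[np.argmax(fails)] ^= 1        # flip the worst offender
    return r

# Toy stand-in parity-check matrix (Hamming(7,4)) and a single-bit channel error
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 1, 0, 1, 0, 0, 1])
received = codeword.copy(); received[2] ^= 1   # flip bit 2 in the channel
decoded = bit_flip_decode(H, received)
```

The sum product algorithm replaces the hard syndrome counts with soft probability messages, which is what buys its better BER at higher complexity.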
Performance Analysis of Steepest Descent Decoding Algorithm for LDPC Codes - idescitation
Among various hard-decision-based Bit Flipping (BF) algorithms for decoding Low-Density Parity-Check (LDPC) codes, such as Weighted Bit Flipping (WBF) and Improved Reliability Ratio Weighted Bit Flipping (IRRWBF), the Steepest Descent Bit Flipping (SDBF) algorithm achieves better error performance. In this paper, the performance of the steepest descent algorithm in both single steepest descent and multi steepest descent modes is analysed, and the performance of the IEEE 802.16e standard is analysed using the SDBF decoding algorithm. SDBF requires fewer check node and variable node operations compared to the Sum Product Algorithm (SPA) and Min Sum Algorithm (MSA). The Multi-SDBF achieves a coding gain of 0.1 to 0.2 dB compared to Single-SDBF without requiring complex log and exponential operations.
FPGA Implementation of LDPC Encoder for Terrestrial Television - AI Publications
The increasing data rates in digital television networks raise the demands on the capacity of current transmission channels. Through new standards, the capacity of existing channels is increased with new methods of error correction coding and modulation. In this work, Low Density Parity Check (LDPC) codes are implemented for their error correcting capability. LDPC is a linear error correcting code used for transmitting a message over a noisy channel, and LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer. These codes perform near the Shannon limit and have low decoding complexity. LDPC uses a parity check matrix for encoding and decoding; the parity check matrix helps in detecting and correcting errors, a very important advantage on noisy channels. This work presents the design and implementation of an LDPC encoder for transmission of digital terrestrial television according to the Chinese DTMB standard. The system is written in Verilog and implemented on an FPGA, and the whole design is verified against a Matlab model.
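The encoder's core relationship between the parity-check matrix and encoding can be illustrated with a tiny systematic code. The parity pattern `P` here is hypothetical and far smaller than the DTMB LDPC matrices, and the sketch is in Python rather than Verilog:

```python
import numpy as np

# Hypothetical parity pattern for a toy (7,4) systematic code
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.concatenate([np.eye(4, dtype=int), P], axis=1)    # generator G = [I | P]
H = np.concatenate([P.T, np.eye(3, dtype=int)], axis=1)  # parity check H = [P^T | I]

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2          # systematic encoding: message bits then parity
syndrome = H @ codeword % 2     # all-zero syndrome: every parity check passes
```

The same `H` used here to verify the encoder is what the decoder later uses to detect and correct channel errors, which is why the abstract calls the parity check matrix the key to both operations.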
Performances of Concatenated LDPC based STBC-OFDM System and MRC Receivers - IJECEIAES
This paper presents the bit error rate performance of low density parity check (LDPC) coding concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a 3/4-rate convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, the Maximum Ratio Combining (MRC) channel equalization technique has been implemented to extract transmitted symbols without enhancing noise power.
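The MRC step at the receiver can be sketched for one symbol received over two independent flat-fading branches. This is an illustrative toy with hypothetical gains and noise, not the paper's full STBC-OFDM chain:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 1 + 1j                                          # transmitted symbol
h = rng.normal(size=2) + 1j * rng.normal(size=2)    # known complex branch gains
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r = h * s + noise                                   # per-branch received samples

# MRC: weight each branch by the conjugate of its gain, then normalize by
# total branch power; stronger branches contribute more, and the weighting
# does not enhance the noise power.
s_hat = np.vdot(h, r) / np.sum(np.abs(h) ** 2)
```

With perfect channel knowledge the combined estimate lands very close to the transmitted symbol, which is the property the abstract highlights.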
Implementation of Joint Network Channel Decoding Algorithm for Multiple Acces... - csandit
In this paper, we consider a Joint Network Channel Decoding (JNCD) algorithm applied to a wireless network consisting of M users. M sources desire to send information to one receiver with the help of an intermediate node, the relay. Physical Layer Network Coding (PLNC) allows the relay to decode the combined information sent from the different transmitters. It then forwards additional information to the destination node, which also receives signals from the source nodes. An iterative JNCD algorithm is developed at the receiver to estimate the information sent by each transmitter. Simulation results show that the Bit Error Rate (BER) can be decreased using this concept compared to a reference scheme that does not use network coding.
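The network-coding idea at the relay can be illustrated at packet level. This toy XOR sketch ignores channel noise and the iterative joint decoding the paper actually performs; it only shows why the relay's combined packet adds useful redundancy:

```python
import numpy as np

rng = np.random.default_rng(1)
pkt_a = rng.integers(0, 2, size=8)   # user A's bits
pkt_b = rng.integers(0, 2, size=8)   # user B's bits

relay = pkt_a ^ pkt_b                # relay decodes the combination and forwards the XOR
recovered_b = pkt_a ^ relay          # destination, having A's packet, recovers B's
```

In the real system the destination receives noisy versions of all three transmissions and the iterative JNCD algorithm exploits this redundancy to lower the BER.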
Design and Implementation of an Embedded System for Software Defined Radio - IJECEIAES
In this paper, the development of high performance software for demanding real-time embedded systems is proposed. This software-based design will enable software engineers and system architects in emerging technology areas like 5G wireless and Software Defined Networking (SDN) to build their algorithms. An ADSP-21364 floating point SHARC Digital Signal Processor (DSP) running at 333 MHz is adopted as the platform for the embedded system. To evaluate the proposed embedded system, an implementation of frame, symbol and carrier phase synchronization is presented as an application, and its performance is investigated with an online Quadrature Phase Shift Keying (QPSK) receiver. The obtained results show that the designed software is implemented successfully on the SHARC DSP, which can be utilized efficiently for such algorithms. In addition, the proposed embedded system is shown to be pragmatic and capable of dealing with memory constraints and critical timing issues due to the long interleaved coded data used for channel coding.
NON-STATISTICAL EUCLIDEAN-DISTANCE SISO DECODING OF ERROR-CORRECTING CODES OV... - IJCSEA Journal
In this paper we describe novel non-statistical Euclidean-distance soft-input, soft-output (SISO) decoding algorithms for the three currently most important error-correcting codes: low-density parity-check (LDPC), turbo and polar codes. The metric is squared Euclidean distance, and the decoders operate using an antilog-log (AL) process. We have investigated the simulated bit-error rate (BER) performance of these non-statistical algorithms on three channel models: the additive white Gaussian noise (AWGN), Rayleigh fading and Middleton Class-A impulsive noise channels, and compared them with the BER performance of the corresponding statistical decoding algorithms for the three codes and channels. In all cases the performance over the AWGN channel of the non-statistical algorithms is almost the same as or slightly better than that of the statistical algorithms. In some cases the performance over the two non-Gaussian channels of the non-statistical algorithms is worse than that of the statistical algorithms, but the use of a simple signal amplitude limiter placed before the decoder input significantly improves the actual and relative performance of the algorithms. Thus there is no performance loss, and sometimes a significant performance gain, for the proposed decoding algorithms. A major advantage of our algorithms is that estimation of the channel signal-to-noise ratio is not required, which in practice simplifies system implementation. In addition, we have found that the processing complexity of the non-statistical algorithms is similar to or slightly less than that of the corresponding statistical algorithms, and is significantly less for the LDPC codes over all of the channels.
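The flavor of a distance-based soft value that needs no SNR estimate can be sketched for BPSK. This is a hypothetical toy; the paper's AL decoding process for LDPC, turbo and polar codes is far more involved:

```python
import numpy as np

def euclid_soft(r):
    """Per-bit soft value from squared Euclidean distances to the two
    BPSK hypotheses (+1 and -1); positive favours the +1 symbol.
    No noise-variance (SNR) estimate is required."""
    d0 = (r - 1.0) ** 2      # squared distance to symbol +1
    d1 = (r + 1.0) ** 2      # squared distance to symbol -1
    return d1 - d0           # reduces to 4*r for BPSK; sign gives the hard decision

r = np.array([0.9, -1.2, 0.1])   # hypothetical received samples
soft = euclid_soft(r)
```

Contrast this with a statistical log-likelihood ratio, which scales the same quantity by 2/sigma^2 and therefore requires the channel noise variance.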
An efficient reconfigurable code rate cooperative low-density parity check co... - IJECEIAES
In recent times, extensive digital communication has been performed. Overheads such as signal attenuation and code-rate fluctuations during the digital communication process can be minimized, and authentication properly maintained, by adopting parallel encoder and decoder operations. To overcome the above-mentioned drawbacks, a reconfigurable code rate cooperative (RCRC) low-density parity check (LDPC) method is proposed. The proposed RCRC-LDPC code can operate on gigabits per second of data, and it effectively performs linear encoding in dual-diagonal form, widens the range of code rates, and optimizes the degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and is capable of operating at a 0.98 code rate, the highest upper-bounded code rate compared to the existing methods. The implementation has been carried out in MATLAB and, as per the simulation results, the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second with a clock frequency of 160 MHz.
In this paper, low-complexity architectures for finding the first two maximum or minimum values are considered; such architectures are of paramount importance in several applications, including iterative decoders. A property of the min-sum processing step is that it produces only two distinct output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architecture structures utilize the minimum number of comparators by exploiting the concept of survivors in the search, resulting in fewer comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize latency, complexity and power consumption. The decoding algorithms we propose generalize and unify the decoding schemes originally presented for product codes and those of low-density parity-check codes.
Reduced Energy Min-Max Decoding Algorithm for LDPC Code with Adder Correction... - ijceronline
In this paper, low-complexity architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are analysed, and an adder-based LDPC correction scheme is proposed. A property of the min-sum processing step is that it produces only two distinct output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architecture layouts employ the minimum number of comparators by exploiting the concept of survivors in the search, resulting in fewer comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize latency, complexity and power consumption.
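The survivor-based search for the first two minima described above can be sketched as a single pass. This is illustrative Python, not the proposed micro-architecture; a min-sum check node needs exactly these two magnitudes plus the position of the smallest:

```python
def two_min(vals):
    """One pass keeping the two smallest values and the index of the smallest.
    When a new overall minimum appears, the old one 'survives' as runner-up,
    so roughly 2n comparisons suffice instead of two full n-way searches."""
    m1, m2, idx = float("inf"), float("inf"), -1
    for i, v in enumerate(vals):
        if v < m1:
            m1, m2, idx = v, m1, i   # new minimum; previous minimum survives
        elif v < m2:
            m2 = v                   # new runner-up
    return m1, m2, idx

m1, m2, idx = two_min([0.7, 0.2, 0.9, 0.4])
```

In a min-sum decoder, every outgoing check-node message then takes magnitude `m1`, except the message back to position `idx`, which takes `m2`.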
Hardware Architecture of Complex K-best MIMO Decoder - CSCJournals
This paper presents a hardware architecture of a complex K-best Multiple Input Multiple Output (MIMO) decoder that reduces the complexity of the Maximum Likelihood (ML) detector. We develop a novel low-power VLSI design of a complex K-best decoder for MIMO with the 64-QAM modulation scheme. Use of Schnorr-Euchner (SE) enumeration and a new parameter, Rlimit, reduces the complexity of calculating the K best nodes to a certain level with increased performance. A total word length of only 16 bits has been adopted for the hardware design, limiting the bit error rate (BER) degradation to 0.3 dB with list size K and Rlimit equal to 4. The proposed VLSI architecture is modeled in Verilog HDL using Xilinx and synthesized with Synopsys Design Vision in 45 nm CMOS technology. According to the synthesis results, it achieves 1090.8 Mbps throughput with a power consumption of 782 mW and a latency of 0.33 us. The maximum frequency of the proposed design is 181.8 MHz.
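The breadth-first K-best principle, keeping only the K lowest-metric partial candidates per detection layer, can be sketched on a toy real-valued model. The paper's decoder works on complex 64-QAM with SE enumeration and the Rlimit parameter, none of which is modeled here:

```python
import heapq

def k_best(symbols, levels, target, k=4):
    """Pick one symbol per level minimising the accumulated squared error,
    keeping only the k best partial candidates at each level (beam search)."""
    beam = [(0.0, [])]                            # (metric, chosen symbols so far)
    for lvl in range(levels):
        children = [(m + (s - target[lvl]) ** 2, path + [s])
                    for m, path in beam for s in symbols]
        beam = heapq.nsmallest(k, children)       # survivor selection per level
    return beam[0]

# Toy 4-PAM alphabet, three layers, hypothetical equalized observations
metric, path = k_best([-3, -1, 1, 3], levels=3, target=[0.8, -2.6, 1.1], k=4)
```

Unlike full ML detection, the candidate count stays fixed at K per layer, which is what makes the search pipeline-friendly in hardware.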
FPGA Implementation of Soft Output Viterbi Algorithm Using Memoryless Hybrid ... - VLSICS Design
The importance of convolutional codes is well established: they are widely used to encode digital data before transmission through noisy or error-prone communication channels to reduce the occurrence of errors. This paper presents a novel decoding technique, the memoryless Hybrid Register Exchange method, with simulation and FPGA implementation results. It requires a single register, as compared to the Register Exchange Method (REM) and Hybrid Register Exchange Method (HREM); therefore the data transfer operations, and ultimately the switching activity, are reduced.
Similar to Design and implementation of log domain decoder (20)
Bibliometric analysis highlighting the role of women in addressing climate ch... - IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... - IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter is reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... - IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey - IJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... - IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods when all criteria are compared together. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. The proposed hierarchy model is studied and examined using Expert Choice software.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... - IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m2, the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... - IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
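The classic perturb and observe loop that such work modifies can be sketched as follows. This uses a fixed step size and a toy quadratic PV curve with a hypothetical maximum at 5 V; the proposed step-size modification is not reproduced:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """Return the next operating-voltage command: keep perturbing in the same
    direction while power improves, reverse direction when it drops."""
    if p >= p_prev:                                   # power improved
        return v + step if v >= v_prev else v - step  # keep direction
    return v - step if v >= v_prev else v + step      # reverse direction

# Toy PV power curve with its maximum power point at v = 5 (hypothetical)
power = lambda v: -(v - 5.0) ** 2 + 25.0

v_prev, v = 2.0, 2.1
p_prev = power(v_prev)
for _ in range(100):
    p = power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev, step=0.1)
    v_prev, p_prev, v = v, p, v_next
```

The fixed step is the source of the steady-state oscillation the abstract mentions: the operating point hunts around the maximum in steps of 0.1 V, which is exactly what adaptive step sizes and fuzzy controllers aim to suppress.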
Adaptive synchronous sliding control for a robot manipulator based on neural ... - IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed; the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller is effective with small synchronous tracking errors and a significantly reduced chattering phenomenon.
Remote field-programmable gate array laboratory for signal acquisition and de... - IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees which both are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey and are known for having a unique flavor profile. Problem identified that many stingless bees collapsed due to weather, temperature and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review on stingless bee's honey production and prediction modeling. About 54 previous research has been analyzed and compared in identifying the research gaps. A framework on modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis on the internet of things (IoT) monitoring systems, honey production estimation, convolution neural networks (CNNs), and automatic identification methods on bee species. It is identified based on image detection method the top best three efficiency presents CNN is at 98.67%, densely connected convolutional networks with YOLO v3 is 97.7%, and DenseNet201 convolutional networks 99.81%. This study is significant to assist the researcher in developing a model for predicting stingless honey produced by bee's output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
2. Int J Elec & Comp Eng ISSN: 2088-8708
Design and implementation of log domain decoder (Mahmood Farhan Mosleh)
1455
the improvement and fast prototyping of the hardware-based simulator system. Also, the authors in [10, 11]
proposed modified decoder designs that reduce the complexity of decoding LDPC codes on
FPGA. In addition, the author in [12] implemented an LDPC decoder based on Xilinx System Generator
(XSG). Furthermore, the authors in [13] designed and implemented a reduced-complexity FPGA-based LDPC decoder
suitable for high-data-rate applications. Moreover, the authors in [14-18]
used FPGA techniques to implement LDPC decoders with different algorithms. Finally, the author in [19]
proposed the FPGA construction of spatially coupled (SC) LDPC codes derived from quasi-cyclic (QC)
LDPC codes.
In this paper, a design of an LDPC system using the log domain algorithm is presented using
Xilinx System Generator (XSG), a new and efficient technique for designing many systems, as
in [20-22]. The LDPC system is implemented using XSG with the software tool Xilinx Vivado 2017.4
and the Xilinx Kintex7 (XC7K325T-2FFG900C) device. The purpose of this paper is to design a communication system
using an LDPC code with a soft-decision decoder, represented by the log domain algorithm, with a reduced
hardware design, in order to assess its performance and determine the extent to which the theoretical results match
the current application. The rest of this paper is organized as follows: the second section presents the theory of the log domain algorithm
used in the decoder, the third section details the design using XSG, the fourth and fifth
sections present the XSG waveforms and synthesis reports respectively, and the sixth
section concludes the paper.
2. LDPC BASED ON LOG DOMAIN DECODER SYSTEM
Figure 1 shows the block diagram of the LDPC system using the log-domain decoder.
Figure 1. LDPC System model based on log decoder
The LDPC encoder is implemented using the matrix method with n = 20 and m = 10 by computing
the check equations of the matrix in the systematic form H_sys = [I_m | A_(m×k)], where k = n − m, as shown below:
H_sys =
[1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0
 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0
 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 0
 0 0 0 1 0 0 0 0 0 0 1 1 0 1 0 0 0 0 1 1
 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0
 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 1
 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0
 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 1
 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1
 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 0]   (1)
where the PCM consists of the identity matrix I, which represents the check equations, and the binary matrix A, which
represents the information part. Depending on the binary matrix, the check equations can be computed as
follows:

C1 = d3 + d6 + d9   (2)

and

C2 = d5 + d6 + d7 + d8   (3)
3. ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 10, No. 2, April 2020 : 1454 - 1468
1456
where C1 and C2 represent the equations for the first and second rows respectively, and so on for
the other equations, and di represents the data bits, where i runs from 1 to 10. A detailed description of the log
decoder is given below.
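As a software illustration of the systematic encoding implied by (1) (not the XSG hardware path itself; the names A, H_sys, and encode are illustrative), the parity bits can be computed as p = A·d (mod 2), so that the codeword c = [p | d] satisfies H_sys·c^T = 0:

```python
import numpy as np

# Information part A of H_sys from eq. (1): columns 11-20 of the PCM.
A = np.array([
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
])
H_sys = np.hstack([np.eye(10, dtype=int), A])  # H_sys = [I_m | A]

def encode(d):
    """Encode 10 data bits d into a 20-bit codeword c = [p | d].

    Since H_sys c^T = 0 and H_sys = [I | A], the parity bits are
    p = A d (mod 2): each p_i is the XOR of the data bits selected
    by row i of A, e.g. C1 = d3 + d6 + d9 as in eq. (2).
    """
    d = np.asarray(d, dtype=int)
    p = A @ d % 2
    return np.concatenate([p, d])
```

For example, row 1 of A selects d3, d6, and d9, reproducing (2); every codeword produced this way satisfies H_sys·c^T = 0 (mod 2), which mirrors the per-row XOR gates of the XSG encoder described later.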
2.1. Log decoder
The log domain algorithm is a type of Sum-Product Algorithm (SPA) which depends on passing
messages between CNs and Bit Nodes (BNs). It is similar to the Bit Flipping Algorithm (BFA), but the BFA takes
a prior hard decision on the received bits as input, whereas the log domain algorithm relies on a soft
decision by taking the probability of every received bit as input [23]. For easier computation of the SPA,
the log-likelihood ratio (LLR) of the prior probabilities (the messages received from the channel) and
the posterior probabilities (the intermediate messages passed between CNs and VNs) is used.
The decoding process consists of an initialization step followed by iterative CN and BN
processing [24]. These steps are listed below [25]:
Step 1 (Initialization): The prior messages sent from BN n to CN m represent the LLR:

Lci(n) = −4·ri / N0   (4)

alphaij(m,n) = sign(Lci(n))   (5)

betaij(m,n) = |Lci(n)|   (6)

where Lci(n) denotes the LLR of the nth bit in the mth parity check for an AWGN channel with a given Signal-to-Noise Ratio (SNR), ri is the signal received from the channel, and N0 is the noise variance; alphaij(m,n) and betaij(m,n) hold the sign and the absolute value of each Lci(n).
Step 2 (CN-to-BN messages): Compute the extrinsic message for each bit n connected to CN m, excluding bit n itself from the product:

Pibetaij(m,n) = log[ (1 + ∏_{n'∈Bm, n'≠n} tanh(betaij(m,n')/2)) / (1 − ∏_{n'∈Bm, n'≠n} tanh(betaij(m,n')/2)) ]   (7)

where Pibetaij(m,n) represents, in LLR form, the probability that parity check m is satisfied if bit n is assumed to be 1.
Step 3 (Codeword test):

LQi(n) = Lci(n) + Σ_{m∈An} Pibetaij(m,n)   (8)

where LQi(n), the collective log-likelihood ratio for the nth digit, is the sum of the extrinsic messages and the original LLR calculated in the first step.
For every bit a hard decision is made:

vhat(n) = 1 if LQi(n) < 0, and 0 if LQi(n) > 0.   (9)

If vhat is a valid codeword (H·vhat^T = 0), or if the maximum number of iterations has been reached,
the algorithm terminates.
Step 4 (BN-to-CN messages): The message sent from each BN n to every CN m it is connected to is
similar to (8), except that bit n sends to CN m an LLR computed without using the information from CN
m itself [25]:

Lqij(m,n) = Lci(n) + Σ_{m'∈An, m'≠m} Pibetaij(m',n)   (10)
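To make the message-passing flow concrete, the four steps above can be sketched as a plain software loop. This is a hedged illustration under the paper's sign convention (the ROM maps 0 → −1 and 1 → +1, hence the minus sign in (4)); the names log_domain_decode, H, r, N0, and max_iter are illustrative, not taken from the XSG model. The signs (alphaij) and magnitudes (betaij) are folded together by applying the tanh product of (7) directly to the signed messages:

```python
import numpy as np

def log_domain_decode(H, r, N0, max_iter=20):
    """Log-domain SPA decoder following eqs. (4)-(10)."""
    m, n = H.shape
    Lci = -4.0 * r / N0                 # eq. (4): prior channel LLRs
    Lqij = H * Lci                      # initial BN-to-CN messages
    vhat = (Lci < 0).astype(int)
    for _ in range(max_iter):
        # Step 2, eq. (7): CN-to-BN extrinsic messages; alphaij/betaij
        # of eqs. (5)-(6) are combined via the signed tanh terms.
        Pibetaij = np.zeros((m, n))
        for i in range(m):
            bits = np.flatnonzero(H[i])
            t = np.tanh(Lqij[i, bits] / 2.0)
            for k, j in enumerate(bits):
                prod = np.prod(np.delete(t, k))
                prod = np.clip(prod, -0.999999, 0.999999)  # numeric guard
                Pibetaij[i, j] = np.log((1.0 + prod) / (1.0 - prod))
        # Step 3, eqs. (8)-(9): posterior LLRs and hard decision
        LQi = Lci + Pibetaij.sum(axis=0)
        vhat = (LQi < 0).astype(int)
        if not np.any(H @ vhat % 2):    # codeword test: H vhat^T = 0
            break
        # Step 4, eq. (10): BN-to-CN messages, excluding own check m
        Lqij = H * (LQi - Pibetaij)
    return vhat
```

With the noiseless received vector of a valid codeword, the loop exits at the first codeword test; a weakly corrupted bit is typically corrected within a few iterations.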
3. XSG IMPLEMENTATION OF LDPC CODE BASED ON LOG DECODER
The system in Figure 2 is implemented using XSG. All blocks correspond to the blocks in
Figure 1. A detailed description of each XSG component is presented below:
Figure 2. LDPC system using XSG
3.1. Transmitter section
The transmitter section consists of the source input, serial-to-parallel converter, LDPC encoder, and modulation blocks.
Each of these blocks is designed using XSG in the Simulink/MATLAB environment.
3.1.1. Source input
The Bernoulli Binary Generator block has been used to generate the Bernoulli binary signal.
The performance of the suggested system has been tested using this binary data. The block is configured
as follows: the sample time is 1 and the output data format is Boolean. The gateway-in block
converts the data to unsigned format with a Word Length (WL) of 1 and a Fraction
Length (FL) of 0. The output of this block is the source input to the LDPC encoder.
3.1.2. Serial to parallel block (S/P)
This block exists in the XSG tools. The number of bits in this block is 10, and a latency of 1 is used.
The result from this block is a 10-bit symbol, which can then be split into parallel bits. The slice block
is used to select a particular bit from every sample of the input; ten slice blocks
extract the ten bits. The number of bits in each slice is 1, and the output data format is unsigned with
a WL of 2 and an FL of 0. The block diagram of the S/P converter in XSG is shown in Figure 3.
3.1.3. LDPC encoder
The LDPC encoder consists of XOR gates and a concat block, both available in the XSG library.
The generator matrix method has been used for encoding each message. Each XOR gate implements the parity
check equation for one row of the H matrix in (1). The outputs of these gates then enter the concat
block to form a 20-bit symbol which represents the encoded message. The block diagram of the LDPC encoder
in XSG is shown in Figure 4.
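Generator-matrix encoding over GF(2) reduces to exactly the XOR structure described above: each parity bit is the mod-2 sum of the message bits selected by one column of the parity portion of G. A minimal sketch, with a tiny illustrative G (not the paper's actual matrix):

```python
import numpy as np

def ldpc_encode(message, G):
    """Codeword = message x G over GF(2); addition mod 2 is XOR."""
    return np.mod(message @ G, 2)

# Toy systematic generator matrix G = [I | P] for a (6, 3) code
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])
codeword = ldpc_encode(np.array([1, 0, 1]), G)   # 3 message + 3 parity bits
```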
Int J Elec & Comp Eng, Vol. 10, No. 2, April 2020: 1454-1468
3.1.4. Modulation
The modulation block consists of a parallel to serial block and a mapping block. Every bit of
the serial data is used as an address into a ROM to select either the 0° phase or the 180° phase.
The initial value vector for the ROM is [-1 1]. The output data format is signed with a WL of two bits
and an FL of zero bits. A delay is connected to the enable pin of the ROM block to mask all the serial
bits until they are ready for the mapping process; the delay synchronizes the current bit with
the previous bit. The XSG block of the modulation is shown in Figure 5.
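The ROM-based mapper above amounts to a one-line BPSK lookup: the input bit addresses the ROM contents [-1 1], so address 0 yields -1 and address 1 yields +1. A minimal model (the function name is ours):

```python
def bpsk_map(bits):
    """ROM-based BPSK mapper with initial value vector [-1, 1]."""
    rom = [-1, 1]                 # ROM contents, indexed by the input bit
    return [rom[b] for b in bits]
```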
Figure 3. S/P converter in XSG
Figure 4. Block diagram of LDPC encoder in XSG
3.2. AWGN channel
The AWGN channel has been used for testing the performance of the system: random noise samples
are added to the transmitted signals. The AWGN generator is a System Generator block set used to generate
random signals with a Gaussian distribution of zero mean and unit variance, with a WL of 18 bits and an FL
of 16 bits. The output of the AWGN generator is multiplied by a constant representing the square root of N0,
and the result of this multiplication is added to each bit of the transmitted signal to model the noise at
different SNRs. The XSG block diagram of the AWGN channel is illustrated in Figure 6.
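A hedged software model of this channel block: unit-variance Gaussian samples are scaled by sqrt(N0) and added to each transmitted symbol, exactly as described above. The rng seed and function name are illustrative choices:

```python
import numpy as np

def awgn(tx, N0, rng=None):
    """Add zero-mean Gaussian noise scaled by sqrt(N0) to signal tx."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(len(tx))   # zero mean, unit variance
    return tx + np.sqrt(N0) * noise
```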
Figure 5. Block diagram of modulation in XSG
Figure 6. AWGN channel in XSG
3.3. Receiver part
The receiver section consists of the following blocks:
3.3.1. Initialization
Initialization is implemented by multiplying each bit by -4/N0, where N0 is the noise
variance for each Signal to Noise Ratio (SNR) according to (4). For SNR from 0 to 7 dB the values of N0 are 1,
0.7943, 0.6310, 0.5012, 0.3981, 0.3162, 0.2512 and 0.1995 respectively. This process is applied to the 20
bits serially as shown in Figure 7. After this, the sign and the absolute value of each bit are computed
in order to apply (7), as shown in Figure 8.
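The listed N0 values are simply 10^(-SNR/10) for SNR in dB. The initialization step itself can be sketched as follows, splitting the channel LLR into sign and magnitude as (7) requires (names are ours):

```python
import numpy as np

def init_llr(y, N0):
    """Scale received samples by -4/N0, return per-bit sign and magnitude."""
    Lc = -4.0 / N0 * np.asarray(y)
    return np.sign(Lc), np.abs(Lc)
```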
Figure 7. XSG block diagram of initialization
Figure 8. XSG block diagram of alphaij block
3.3.2. Serial to parallel
The serial to parallel block converts the serial samples to parallel samples,
where each sample has a format of WL of 18 bits and FL of 16 bits, and 20 latch registers are used to
hold the 20 parallel samples. This block has been implemented using shift registers. The block diagram of
the serial to parallel converter in XSG is shown in Figure 9.
Figure 9. Block diagram of serial to parallel in XSG
3.3.3. Log decoder
The inputs to this block are 20-bit symbols and the outputs are 10-bit symbols which
represent the information message after the decoding process, as shown in Figure 10. The blocks in part A
implement the horizontal step and the blocks in part B implement the vertical step.
Figure 10. Block diagram of log decoder in XSG
3.3.3.1. Horizontal step
In this step, the computations are based on the number of 1s in each of the ten rows of
the PCM. The extrinsic probability for each bit is computed by multiplying the sign value for that bit
by the sum of the probabilities of the non-zero bits in the row. The amount of computation depends on
the number of non-zero columns in each row, as shown in Figure 11.
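The horizontal step is the check-node update of the log-domain SPA. A hedged sketch of its standard form for one PCM row, using Φ(x) = -log(tanh(x/2)); the paper's fixed-point XSG realization may differ in detail:

```python
import numpy as np

def phi(x):
    """Phi(x) = -log(tanh(x/2)), its own inverse; clipped for stability."""
    x = np.clip(x, 1e-12, 50.0)
    return -np.log(np.tanh(x / 2.0))

def cn_update_row(signs, mags):
    """Extrinsic CN-to-BN messages for the non-zero entries of one row."""
    beta = np.empty_like(mags)
    for i in range(len(mags)):
        others = np.arange(len(mags)) != i
        s = np.prod(signs[others])          # sign product over the other bits
        beta[i] = s * phi(np.sum(phi(mags[others])))
    return beta
```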
Figure 11. Extrinsic probabilities for row1
3.3.3.2. Vertical step
In this step, the bit messages are updated by summing the extrinsic probabilities for each bit
with the LLR, excluding the information from the target CN; the decision for each bit is made by
combining the extrinsic probabilities with the channel LLR. Each bit is decoded according to the sign of
this value: if it is negative the bit is one, otherwise it is zero, as shown in Figure 12.
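The hard-decision rule above can be sketched as follows: each bit's posterior LLR combines the channel LLR with all incoming CN messages, and a negative total decodes to 1 (names are ours, not the paper's):

```python
import numpy as np

def decide(Lc, beta, H):
    """Hard decision per bit from channel LLRs and CN-to-BN messages."""
    M, N = H.shape
    bits = np.zeros(N, dtype=int)
    for n in range(N):
        A_n = np.nonzero(H[:, n])[0]
        total = Lc[n] + beta[A_n, n].sum()  # posterior LLR for bit n
        bits[n] = 1 if total < 0 else 0     # negative -> 1, otherwise 0
    return bits
```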
Figure 12. Decoding decision for column 11
3.3.4. Down sample and parallel to serial
The symbols resulting from the decoding process, which represent the information part, then
enter the down sample block, which is used to decrease the rate of the signal at the receiver.
The parameters for this block are a sample rate of 20 and a latency of one. After this stage, the bits are
transformed from parallel to serial using a concat block and a parallel to serial block, as shown in
Figure 13.
Figure 13. XSG block diagram of down sample block with concat and parallel to serial
4. RESULTS AND ANALYSIS
The XSG simulation waveforms for each block of the system model are plotted using the Matlab
program. Figure 14 shows the XSG time waveforms of the S/P block. The time representations of the LDPC
encoder and the modulation block are depicted in Figures 15 and 16 respectively. Figure 17 shows the time
waveform of the output of the AWGN channel. Figure 18 shows the XSG time waveforms of the initialization
process. Figure 19 shows the XSG time waveforms of the decoding and down sampling for the first bit.
Figure 20 shows the XSG time waveform of the concat block output. The comparison between the XSG time
waveforms of the transmitter and the receiver is shown in Figure 21. Table 1 compares the BER results from
Matlab simulation and XSG.
Figure 14. XSG time waveforms of the S/P
Figure 15. XSG time waveform of the LDPC encoder output
Figure 16. XSG time waveforms of the modulation block
Figure 17. XSG time waveform of the AWGN channel
Figure 18. XSG time waveforms of the initialization process
Figure 19. XSG time waveforms of the decoding and down sample for the first bit
Figure 20. XSG time waveform of the concat block output
Figure 21. Comparing the signal at the transmitter and at the receiver
Table 1. Comparing the BER in Matlab/simulation and BER in FPGA
SNR(dB) BER simulation BER FPGA
0 0.1429 0.1089
1 0.1000 0.09901
2 0.0786 0.06931
3 0.0286 0.0396
4 0.0429 0.0495
5 0 0.009901
6 0 0.009901
7 0 0
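Each BER figure in Table 1 is the ratio of erroneous decoded bits to transmitted bits. A minimal check of that computation (the bit vectors below are illustrative, not the paper's data):

```python
def ber(tx_bits, rx_bits):
    """Bit error rate: fraction of positions where the bits disagree."""
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)
```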
5. SYNTHESIS REPORTS
The VHDL code is generated by targeting the FPGA device Kintex-7 (XC7K325T-2FFG900C)
in the Vivado 2017.4 program. The device utilization summary of the LDPC system is presented in Table 2.
The minimum period is 26.278 ns and the maximum frequency is 38.055 MHz. In addition, the proposed
LDPC system based on the log domain decoder has been tested and downloaded successfully onto the FPGA
device Kintex-7 (XC7K325T-2FFG900C), as shown in Figure 22.
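The two timing figures quoted above are consistent, since the maximum clock frequency is the reciprocal of the minimum period:

```python
# 1 / (26.278 ns) expressed in MHz: 1e3 / 26.278 ≈ 38.055 MHz
min_period_ns = 26.278
f_max_mhz = 1e3 / min_period_ns
```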
Table 2. Device utilization summary of log decoder system
Resource Utilization Available Utilization Percentage
LUT 44120 203800 21.65
LUTRAM 253 64000 0.40
FF 3571 407600 0.92
BRAM 4.50 445 1.01
DSP 193 840 22.98
Number of bonded IOBs 4 500 0.80
BUFG 1 32 3.13
Figure 22. Testing the system on the Kintex-7 (XC7K325T-2FFG900C)
6. CONCLUSION
A complete LDPC decoder system based on the log domain algorithm has been designed and implemented
successfully using XSG. Xilinx Vivado 2017.4 and the Kintex-7 (XC7K325T-2FFG900C) device were used in
the implementation. The obtained results prove that the system works correctly, with results very close to
the theoretical ones. The results confirm that such a decoder can be applied in new communication systems
owing to its low complexity and low BER. FPGA results for BER were 0.1089, 0.09901, 0.06931, 0.0396,
0.0495, 0.009901, 0.009901 and zero for SNR values of 0, 1, 2, ..., 7 respectively. The worst BER occurs at
an SNR of zero while the best BER occurs at an SNR of 7. This means that increasing the SNR leads to a
significant improvement in BER due to the increase in signal power over the noise power. The results confirm
that the theoretical results based on pure simulation are very close to those of the FPGA implementation, as
indicated in Table 1. This means the proposed system can be implemented successfully in modern
communications.
REFERENCES
[1] R. G. Gallager, "Low Density Parity Check Codes", Number 21 in Research monograph series. MI Press,
Cambridge, Mass, 1963.
[2] M. K. Roberts and R. Jayabalan, "A Modified Normalized Min-Sum Decoding Algorithm for Irregular LDPC
Codes", International Journal of Engineering and Technology, Vol. 5, No. 6, Jan 2014.
[3] N.P. Bhavsar and B. Vala, "Design of Hard and Soft Decision Decoding Algorithms of LDPC", International
Journal of Computer Applications, Vol. 90, No.16, Mar 2014.
[4] M. F. Mosleh, "Evaluation of Low Density Parity Check Codes Over Various Channel Types", Engineering and
Technology Journal, Vol. 29, Issue 5, pp. 961-971, 2011.
[5] Myung Hun Lee, Jae Hee Han and Myung Hoon Sunwoo, "New Simplified Sum-Product Algorithm For Low
Complexity LDPC Decoding", 2008 IEEE Workshop on Signal Processing Systems, Washington, DC,
pp. 61-66, 2008.
[6] O. O. Khalifa, et al., "Performance Evaluation of Low Density Parity Check Codes", Journal of Electrical and
Electronics Engineering, Vol.2, No.6, Jun 2008.
[7] T. J. Richardson and R. L. Urbanke, "Efficient Encoding of low-density parity-check codes", in IEEE Transactions
on Information Theory, Vol. 47, No. 2, pp. 638-656, Feb 2001.
[8] R. Zarubica, S. G. Wilson and E. Hall, "Multi-Gbps FPGA-Based Low Density Parity Check (LDPC) Decoder
Design", IEEE GLOBECOM 2007 - IEEE Global Telecommunications Conference, Washington, DC,
pp. 548-552, 2007.
[9] S. M. E. Hosseini, Kheong Sann Chan and Wang Ling Goh, "A reconfigurable FPGA implementation of an LDPC
decoder for unstructured Codes", 2008 2nd International Conference on Signals, Circuits and Systems, Monastir,
pp. 1-6, 2008.
[10] V. A. Chandrasetty and S. M. Aziz, " FPGA Implementation of High Performance LDPC Decoder using Modified
2-bit Min-Sum Algorithm", 2010 Second International Conference on Computer Research and Development, 2010.
[11] S. S. Khati, P. Bisht and S.C. Pujari, "Improved Decoder Design for LDPC Codes based on selective node
processing", 2012 World Congress on Information and Communication Technologies, 2012.
[12] A. K. Panigrahi and A. K. Panda, " Design of LDPC Decoder Using FPGA: Review of Flexibility", International
Journal of Engineering and Science, Vol. 1, pp. 36-41, Nov 2012.
[13] J.C. Porcello, "Designing and Implementing Low Density Parity Check (LDPC) Decoders using FPGAs,"
Aerospace Conference, IEEE, 2014.
[14] T. Nguyen-Ly et al., "FPGA Design of High Throughput LDPC Decoder based on Imprecise Offset Min-Sum
Decoding", IEEE 13th International New Circuits and Systems Conference, 2015.
[15] D. Zou and I. B. Djordjevic, "FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical
communication systems", Optics Express, Vol. 24, No. 18, pp. 21159–21166, 2016.
[16] A. Boudaoud, M. El Haroussi, E. Abdelmounim, "VHDL Design and FPGA Implementation of LDPC Decoder for
High Data Rate", International Journal of Advanced Computer Science and Applications, Vol. 8, No. 4, 2017.
[17] P.V Sreemohan and N. Sebastian, "FPGA Implementation of Min-Sum Algorithm for LDPC Decoder",
International Conference on Trends in Electronics and Informatics, 2017.
[18] B. S. Reddy and V. S. R. Rao, "FPGA Implementation of LDPC Encoder and Decoder using Bit Flipping
Algorithm ", International Journal of Science and Research, Vol. 6, No. 9, pp. 1683–1690, 2017.
[19] X. Sun and I. B. Djordjevic, "FPGA implementation of rate-adaptive spatially coupled LDPC codes suitable for
optical communications", Optics Express, Vol. 27, No. 3, 4 Feb 2019.
[20] Mahmood F. Mosleh, Mais F. Abid and Mohammed Al-Sadoon, "Implementation of Turbo Code Based Xilinx
System Generator", 9th International EAI Conference, Broadnets Faro, Portugal, 2018.
[21] Dong-U Lee," Hardware Designs for Function Evaluation and LDPC Coding", Ph.D. Thesis, Imperial College
London, 2004.
[22] Asisa K. Panigrahi and Ajit K. Panda, "Design of LDPC Decoder Using FPGA: Review of Flexibility",
International Journal of Engineering and Science, Vol. 1, Issue 7, pp. 36-41, 2012.
[23] S. J. Johnson, "Introducing Low Density Parity Check Codes", MSc. Thesis, University of Newcastle Australia,
2004.
[24] M. Kiaee, H. Gharaee and N. Mohammadzadeh, "LDPC Decoder Implementation Using FPGA", 8th
International Symposium on Telecommunications, 2016.
[25] M. Kolayli, "Comparison of Decoding Algorithms for Low-Density Parity-Check Codes", MSc. Thesis, Middle
East Technical University, 2006.
BIOGRAPHIES OF AUTHORS
Mahmood F. Mosleh received his B.Sc., M.Sc. and Ph.D. degrees in 1995, 2000 and 2008
respectively. He is a Professor of Communication Engineering at MTU, Iraq, with more than 35
publications in national and international conferences and journals. Prof. Mahmood is a member of
the IEC and the head of the Iraqi committee of the IEC.
Fadhil S. Hasan was born in Baghdad, Iraq, in 1978. He received his B.Sc. degree in Electrical
Engineering in 2000 and his M.Sc. degree in Electronics and Communication Engineering in
2003, both from Mustansiriyah University, Iraq. He received his Ph.D. degree in Electronics and
Communication Engineering from Basrah University, Iraq, in 2013. In 2005, he joined the Faculty
of Engineering at Mustansiriyah University in Baghdad. His recent research activities cover
wireless communication systems, multicarrier systems, wavelet based OFDM, MIMO systems,
speech signal processing, chaotic modulation, FPGA and Xilinx System Generator based
communication systems. He is now an Assistant Professor at Mustansiriyah University, Iraq.
Ruaa Majeed Azeez was born in Iraq on March 11, 1989. She received her B.Sc. in Computer
Engineering Techniques from Middle Technical University, Baghdad, Iraq, in 2011. She has worked
as a teaching assistant at AL Furat AL Awsat Technical University since 2013. She is currently
pursuing her M.Sc. in Computer Technical Engineering with an emphasis on communication
systems.