This document proposes a method to estimate the bitlength of transformed and quantized residue coefficients and syntax elements for mode decision in H.264 baseline encoding. It aims to reduce the computational complexity of calculating rate-distortion cost (RD-Cost) by estimating bitlengths without fully encoding bitstreams. The key aspects are:
1) It classifies residue coefficients and estimates bitlength for coefficient types like Luma_4x4, Luma DC, Luma AC, Chroma DC, and Chroma AC based on coding tables and context like neighboring coefficients.
2) It estimates bitlengths for syntax elements such as macroblock type, prediction modes, and motion vectors based on Exp-Golomb coding tables (see the sketch below).
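Since Exp-Golomb codeword lengths depend only on the code number, the estimator can compute bit costs directly instead of generating the bitstream. A minimal Python sketch of that idea (illustrative, not the paper's actual estimator):

    def ue_bits(v):
        # Unsigned Exp-Golomb: floor(log2(v+1)) leading zeros, a 1, then
        # floor(log2(v+1)) suffix bits, so 2*floor(log2(v+1)) + 1 in total.
        return 2 * ((v + 1).bit_length() - 1) + 1

    def se_bits(v):
        # Signed Exp-Golomb maps v to code number 2|v|-1 (v > 0) or 2|v| (v <= 0).
        return ue_bits(2 * abs(v) - 1 if v > 0 else 2 * abs(v))

    # Example: a motion vector difference of -3 maps to code number 6,
    # which costs se_bits(-3) = 5 bits.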
Performance analysis and implementation for nonbinary quasi cyclic LDPC decod... (ijwmn)
Non-binary low-density parity check (NB-LDPC) codes are an extension of binary LDPC codes with significantly better performance. Although various kinds of low-complexity iterative decoding algorithms have been proposed, VLSI implementation of NB-LDPC decoders remains a major challenge due to their high complexity and long latency. In this brief, a highly efficient check node processing scheme that greatly reduces processing delay, comprising a Min-Max decoding algorithm and a check node unit, is proposed. Compared with previous works, the latency of the check node unit is reduced by up to 52%. In addition, the efficiency of the presented techniques is demonstrated on a (620, 310) NB-QC-LDPC decoder.
A new efficient FPGA design of residue-to-binary converter (VLSICS Design)
In this paper, we introduce a new 6n-bit dynamic range moduli set {2^(2n), 2^(2n) + 1, 2^(2n) - 1} and present its associated novel reverse converters. First, we simplify the Chinese Remainder Theorem in order to obtain an efficient reverse converter that is completely memoryless and adder-based. Next, we present a low-complexity implementation that does not require the explicit use of the modulo operation in the conversion process, and we demonstrate theoretically that it outperforms state-of-the-art equivalent reverse converters. We also implemented the proposed converter and the best equivalent state-of-the-art reverse converters on a Xilinx Spartan-6 FPGA. The experimental results confirmed the theoretical evaluation: the FPGA synthesis results indicate that, on average, our proposal is about 52.35% better in conversion time and 43.94% better in hardware resources.
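For orientation, the textbook CRT reverse conversion that this paper streamlines can be sketched in a few lines of Python; the paper's converter is adder-based and avoids the explicit modulo and inverses used here:

    from math import prod

    def crt_reverse(residues, moduli):
        # Classic CRT: X = sum(r_i * M_i * inv(M_i) mod m_i) mod M, M_i = M / m_i.
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
        return x % M

    n = 4
    moduli = [2**(2*n), 2**(2*n) + 1, 2**(2*n) - 1]   # pairwise coprime
    X = 123456
    residues = [X % m for m in moduli]
    assert crt_reverse(residues, moduli) == X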
Analysis of LDPC Codes under Wi-Max IEEE 802.16e (IJERD Editor)
LDPC codes have been shown to be among the best coding schemes for transmitting messages over noisy channels. The main aim of this paper is to study the behaviour of LDPC codes under IEEE 802.16e guidelines. Rate-1/2 LDPC codes have been simulated on an AWGN channel, and the results show that they achieve low BER on such channels. The BER can be further reduced by increasing the block length.
This document describes a proposed high-speed linear feedback shift register (LFSR) design for a Bose-Chaudhuri-Hocquenghem (BCH) encoder based on a sample-period reduction technique. Specifically:
1. The LFSR generates parity bits that are concatenated with the message bits to form a codeword for error detection and correction in the BCH encoder.
2. To increase throughput and speed, the LFSR is unfolded via parallel processing, which increases the number of message bits processed per clock cycle (see the sketch after this list).
3. An unfolding factor is selected by analyzing criteria such as computational time and iteration bounds, to reduce the sampling period and thereby decrease...
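To make the serial baseline concrete, here is a bit-serial LFSR divider of the kind being unfolded, in Python (a sketch, not the paper's design); an unfolding factor J would replicate the feedback logic so that J loop iterations collapse into one clock cycle:

    def lfsr_parity(msg, g):
        # g: generator polynomial, MSB first, e.g. x^3 + x + 1 -> [1, 0, 1, 1].
        r = len(g) - 1                 # number of parity bits
        reg = [0] * r                  # the shift register
        for bit in msg:                # message bits, MSB first
            fb = bit ^ reg[0]          # feedback = input bit XOR register MSB
            reg = reg[1:] + [0]        # shift left by one position
            if fb:
                reg = [a ^ b for a, b in zip(reg, g[1:])]   # XOR in the taps
        return reg                     # msg followed by parity = systematic codeword

    # (7,4) Hamming example: message 1000 gets parity 101.
    assert lfsr_parity([1, 0, 0, 0], [1, 0, 1, 1]) == [1, 0, 1]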
FPGA Implementation of Mixed Radix CORDIC FFT (IJSRD)
In this paper, the architecture and FPGA implementation of a Coordinate Rotation Digital Computer (CORDIC) pipelined Fast Fourier Transform (FFT) processor is presented. The FFT is a highly efficient algorithm that uses a divide-and-conquer approach for fast calculation of the Discrete Fourier Transform (DFT) to obtain the frequency spectrum. The CORDIC algorithm is hardware-efficient: it avoids conventional multiply-and-accumulate (MAC) units and instead evaluates trigonometric functions by rotating a complex vector using only add and shift operations. We have developed fixed-point FFT processors in VHDL for implementation on a Field Programmable Gate Array. A mixed-radix 8-point DIF FFT/IFFT architecture with a CORDIC twiddle-factor generation unit and a pipelined implementation has been developed on a Xilinx XC3S500E Spartan-3E FPGA, achieving a maximum frequency of 157.359 MHz for a 16-bit, 8-point FFT. Results show that the processor uses fewer LUTs while achieving a high maximum frequency.
This document discusses low-density parity-check (LDPC) codes and their decoding using belief propagation on factor graphs. It introduces LDPC codes and their representation by sparse parity-check matrices and Tanner graphs. It describes irregular and regular LDPC codes, degree distributions, code ensembles, and decoding using belief propagation on factor graphs and the sum-product algorithm. Examples of decoding an LDPC code over a binary-input additive white Gaussian noise channel are also presented.
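The heart of the sum-product algorithm is the check node update, usually written with the tanh rule over log-likelihood ratios (LLRs). A minimal Python sketch of one such update:

    import math

    def check_node_update(llrs_in):
        # Extrinsic LLR for edge j combines all other incoming edges:
        # L_out_j = 2 * atanh( prod_{i != j} tanh(L_in_i / 2) )
        t = [math.tanh(l / 2.0) for l in llrs_in]
        out = []
        for j in range(len(llrs_in)):
            p = 1.0
            for i, ti in enumerate(t):
                if i != j:
                    p *= ti
            p = max(min(p, 0.999999), -0.999999)   # keep atanh finite
            out.append(2.0 * math.atanh(p))
        return out

    # Three incoming LLRs on a degree-3 check node:
    print(check_node_update([1.2, -0.8, 2.5]))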
A Low Power VITERBI Decoder Design With Minimum Transition Hybrid Register Ex... (VLSICS Design)
This work proposes a low-power implementation of a Viterbi decoder. Most past Viterbi decoder designs use either the simple register exchange method to achieve very high speed or the traceback method for low-power decoding, but these suffer from complex routing and high switching activity respectively. Here the survivor memory unit is simplified by storing only m-1 bits to identify the previous state in the survivor path and by assigning m-1 registers to the decision vectors. This approach eliminates unnecessary shift operations, and storing the decoded data requires only half the memory of the register exchange method. In this paper a hybrid approach combining the traceback and register exchange schemes is applied to the Viterbi decoder design; using the distance properties of the encoder, it is further refined into a minimum-transition hybrid register exchange method, which lowers dynamic power consumption through reduced switching activity. Dynamic power estimation obtained through gate-level simulation indicates that the proposed design reduces the power dissipation of a conventional Viterbi decoder design by 30%.
Simulation of Turbo Convolutional Codes for Deep Space Mission (IJERA Editor)
In satellite communication, deep space missions are the most challenging, since the system has to work at very low Eb/No. Concatenated codes are the ideal choice for such missions. The paper describes the simulation of turbo codes in SIMULINK. The performance of a turbo code depends on various factors; in this paper we consider the impact of interleaver design. A detailed simulation is presented and the performance of different interleaver designs is compared.
IRJET- Review Paper on Study of Various Interleavers and their Significance (IRJET Journal)
This document reviews various interleavers and their significance in digital communication systems. It discusses how interleavers can be used in turbo encoders and decoders to improve error correction capabilities without reducing bandwidth. The document summarizes different types of interleavers including random, QPP, helical, odd-even, and matrix interleavers. It also discusses turbo encoding and decoding processes and how convolutional codes differ from block codes. Key performance metrics like bit error rate and bit error rate curves are analyzed to evaluate and compare interleaver quality.
LDPC Encoding and Hamming Encoding using MATLAB.
An LDPC code is a linear block code characterised by a very sparse parity-check matrix, i.e. one with a very low concentration of 1's, hence the name "low-density parity-check" code. This sparseness is what leads to the excellent bit-error-rate performance of LDPC codes.
Performance Study of RS (255, 239) and RS (255, 233) Used Respectively in DVB-... (IJERA Editor)
Error correction codes have a wide range of applications in digital communication (satellite, wireless) and digital data storage. This paper presents a comparative study of the performance of RS (255, 239) and RS (255, 233), used respectively by Digital Video Broadcasting - Terrestrial (DVB-T) and the National Aeronautics and Space Administration (NASA). The performance was evaluated by applying modulation schemes over an additive white Gaussian noise (AWGN) channel and is reported as bit error rate (BER) versus signal energy-to-noise power density ratio (Eb/No). The analysis is carried out with the MATLAB simulator, modelling a communication link with an AWGN channel and different modulations.
The document describes an LDPC (Low Density Parity Check) codes project done by a group of students. Key points:
- The group generated a sparse parity check matrix H for LDPC encoding that avoids cycles of length 4.
- They implemented LDPC encoding in MATLAB and Verilog, calculating the parity bits from the input message bits using the formula p = (B^-1)(Au^T) over GF(2) (sketched after this list).
- The Verilog implementation was tested on a Nexys-2 FPGA board, with input bits entered via switches and parity bits output to LEDs.
- The project was completed over 8 weeks. While it demonstrated LDPC encoding, the group noted the encoder has high delay and...
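The encoding formula the group used is ordinary GF(2) linear algebra once H is split as [A | B] with B invertible. A toy numpy sketch (the matrices here are illustrative, not the group's actual H):

    import numpy as np

    def gf2_inv(B):
        # Invert a binary matrix over GF(2) by Gauss-Jordan elimination.
        n = B.shape[0]
        M = np.concatenate([B % 2, np.eye(n, dtype=int)], axis=1)
        for col in range(n):
            pivot = next(r for r in range(col, n) if M[r, col])
            M[[col, pivot]] = M[[pivot, col]]
            for r in range(n):
                if r != col and M[r, col]:
                    M[r] ^= M[col]
        return M[:, n:]

    A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])   # toy H = [A | B]
    B = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1]])
    u = np.array([1, 0, 1])                           # message bits
    p = gf2_inv(B) @ (A @ u) % 2                      # p = B^-1 (A u^T)
    c = np.concatenate([u, p])                        # systematic codeword
    assert not (np.concatenate([A, B], axis=1) @ c % 2).any()   # H c^T = 0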
International Journal of Engineering Research and Applications (IJERA) is an open access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
This document discusses Low Density Parity Check (LDPC) codes. It describes LDPC codes as having sparse parity check matrices, which allows for large minimum distances and improved error correction performance. It explains the differences between regular and irregular LDPC codes, and discusses factors like minimum distance, cycle length, linear independence, and encoding and decoding of LDPC codes. It provides examples of parity check matrices and generator matrices. It also provides an overview of an LDPC system and the encoding process.
This document discusses low-density parity-check (LDPC) codes. It begins with an overview of LDPC codes, noting they were originally invented in the 1960s but gained renewed interest after turbo codes. It then covers LDPC code performance and construction, including generator and parity-check matrices. Various representations of LDPC codes are examined, such as matrix and graphical representations using Tanner graphs. Applications of LDPC codes include wireless, wired, and optical communications. In conclusion, turbo codes came within a small gap of the theoretical limits and led to new codes such as LDPC codes, which provide high-speed, high-throughput performance close to the Shannon limit.
PERFORMANCE ESTIMATION OF LDPC CODE USING SUM PRODUCT ALGORITHM AND BIT FLIPP... (Journal For Research)
Low density parity check code is a linear block code that approaches the Shannon limit with low decoding complexity. We take a rate-1/2 LDPC code as the error-correcting code in a digital video stream and study its performance with BPSK modulation over an AWGN (Additive White Gaussian Noise) channel under the sum-product algorithm and the bit-flipping algorithm. The plot of bit error rate against SNR is taken as the output performance measure of the proposed methodology; BER is evaluated for different numbers of frames and different numbers of iterations, and the performance of the sum-product and bit-flipping algorithms is compared. All simulation work was implemented in MATLAB.
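Of the two decoders compared, bit flipping is the simpler one: each iteration flips the bits that participate in the most unsatisfied parity checks. A bare-bones numpy sketch under that standard formulation:

    import numpy as np

    def bit_flip_decode(H, r, max_iter=20):
        # Gallager-style hard-decision bit flipping (a sketch).
        c = r.copy()
        for _ in range(max_iter):
            syndrome = H @ c % 2
            if not syndrome.any():
                break                        # all parity checks satisfied
            fails = syndrome @ H             # failed-check count per bit
            c[fails == fails.max()] ^= 1     # flip the worst offenders
        return c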
Error correcting coding has become an essential part of nearly all modern data transmission and storage systems. Low density parity check (LDPC) codes are a class of linear block codes with superior performance close to the Shannon limit. In this paper two error correcting codes from the LDPC family, Euclidean Geometry Low Density Parity Check (EG-LDPC) codes and non-binary low density parity check (NB-LDPC) codes, are compared in terms of power consumption, number of iterations and other parameters. For better performance of EG-LDPC codes, a Maximum Likelihood (ML) algorithm was proposed. NB-LDPC codes can provide better error correcting performance with an average of 10 to 30 iterations but have high decoding complexity, which the EG-LDPC codes with the ML algorithm improve upon, needing only three iterations to detect and correct errors. One-step majority logic decodable (MLD) codes, a subclass of EG-LDPC codes, are used to avoid high decoding complexity. The power consumed by the NB-LDPC decoder is 2.729 W, whereas the EG-LDPC decoder with the ML algorithm consumes 1.148 W.
Iaetsd finger print recognition by CORDIC algorithm and pipelined FFT (Iaetsd)
This document proposes an efficient CORDIC pipelined FFT algorithm for fingerprint recognition on FPGAs. The CORDIC algorithm uses only shift and add operations, making it suitable for replacing multipliers in the butterfly operations of an FFT. This reduces computational complexity. The proposed system takes a fingerprint, processes it with the CORDIC pipelined FFT, extracts features which are stored and then matched against a test fingerprint for recognition. The algorithm aims to provide an efficient hardware implementation of FFT and fingerprint recognition using minimal computations.
The document discusses low density parity check (LDPC) codes. It begins with a brief history of LDPC codes, invented by Gallager in 1960 but rediscovered in the 1990s. It then discusses linear block codes and how they can be represented by generator and parity-check matrices. The key properties of LDPC codes are described, including their sparse parity-check matrix and regular or irregular structure. Decoding of LDPC codes using Tanner graphs and hard-decision bit-flipping algorithms is explained. Finally, some applications of LDPC codes in communication systems and data storage are given.
PERFORMANCE OF ITERATIVE LDPC-BASED SPACE-TIME TRELLIS CODED MIMO-OFDM SYSTEM... (ijcseit)
This paper presents the bit error rate (BER) performance of a low density parity check (LDPC) based space-time trellis coded 2×2 multiple-input multiple-output orthogonal frequency-division multiplexing (STTC-MIMO-OFDM) system for text message transmission. The system under investigation incorporates a rate-1/2 LDPC encoding scheme under various digital modulations (BPSK, QPSK and QAM) over an additive white Gaussian noise (AWGN) channel and Rayleigh and Rician fading channels, with two transmit and two receive antennas. At the receiving section of the simulated system, Minimum Mean-Square-Error (MMSE) channel equalization has been implemented to extract the transmitted symbols without enhancing the noise power level. The effectiveness of the proposed system is analyzed in terms of BER versus signal-to-noise ratio (SNR). The MATLAB-based simulation study shows that the proposed system performs best with BPSK as compared to the other digital modulation schemes at relatively low SNRs under AWGN, Rayleigh and Rician fading channels. The transmitted text message is retrieved effectively at the receiver using the iterative sum-product LDPC decoding algorithm. As anticipated, the performance of the LDPC-based STTC-MIMO-OFDM system degrades as the noise power increases.
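The MMSE equalizer mentioned above has the standard closed form x_hat = (H^H H + sigma^2 I)^-1 H^H y, which is what avoids the noise enhancement of plain channel inversion. A numpy sketch under that textbook assumption:

    import numpy as np

    def mmse_equalize(H, y, noise_var):
        # x_hat = (H^H H + sigma^2 I)^-1 H^H y; the sigma^2 I term regularizes
        # the inversion, unlike zero forcing which amplifies noise.
        n_tx = H.shape[1]
        G = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
        return G @ y

    rng = np.random.default_rng(0)
    H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    x = np.array([1.0, -1.0])                 # two BPSK symbols, 2x2 MIMO
    y = H @ x + 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    print(np.sign(mmse_equalize(H, y, noise_var=0.02).real))   # typically [ 1. -1.]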
An Efficient Decoding Algorithm for Concatenated Turbo-CRC Codes (IJTET Journal)
In this paper, a hybrid turbo decoding algorithm is used in which the outer Cyclic Redundancy Check (CRC) code is used not for error detection, as usual, but for error correction and improvement. The algorithm effectively combines iterative decoding with Rate-Compatible Insertion Convolutional Turbo decoding, where the CRC code and the turbo code are treated as an integrated whole in the decoding process. We also propose an effective error detection method based on normalized Euclidean distance to compensate for the loss of the error detection capability that would otherwise have been provided by the CRC code. Simulation results show that the proposed approach achieves a 0.5-2 dB performance gain for code blocks with short information lengths.
The document proposes new hardware architectures for generating the binary Golay code (23,12) and the extended Golay code (24,12). The architectures aim to overcome disadvantages of existing approaches, such as not supporting messages with leading zero bits. The binary Golay code architecture uses a priority encoder and conditional shifting of the remainder to perform the polynomial division efficiently. The extended Golay code architecture appends a parity bit by measuring the weight of the binary Golay codeword and selecting the parity from its least significant bit. Simulation results show the proposed architectures achieve high-speed, low-latency, low-power generation of the Golay codes.
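For context, systematic (23,12) encoding is polynomial division by an 11th-degree generator, and the (24,12) extension appends an overall parity bit derived from the codeword weight, matching the weight-based selection described above. A Python sketch, assuming one of the two standard Golay generators, g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1:

    G = 0b110001110101   # x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1

    def golay23_encode(msg12):
        # Systematic encoding: codeword = msg * x^11 + (msg * x^11 mod g).
        rem = msg12 << 11
        for shift in range(22, 10, -1):       # long division over GF(2)
            if rem >> shift & 1:
                rem ^= G << (shift - 11)
        return (msg12 << 11) | rem            # 23-bit codeword

    def golay24_encode(msg12):
        # Extended code: append a parity bit so every codeword weight is even.
        c = golay23_encode(msg12)
        return (c << 1) | (bin(c).count('1') & 1)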
FPGA Implementation of Efficient Viterbi Decoder for Multi-Carrier Systems (IJMER)
In this paper we design and implement a convolutional encoder and an Adaptive Viterbi Decoder (AVD), essential blocks in a digital communication system, using FPGA technology. Convolutional coding is a coding scheme used in communication systems for error correction, employed in applications such as deep space and wireless communications. It provides an alternative to block codes for transmission over a noisy channel: block codes can be applied only to blocks of data, whereas convolutional coding can be applied to both continuous data streams and blocks of data. The Viterbi decoder uses a PNPH (Permutation Network based Path History) management unit, a special path management unit that gives faster decoding with less routing area. The proposed architecture is realized as an Adaptive Viterbi Decoder with constraint length K of 3 and a code rate (k/n) of 1/2 in Verilog HDL. Simulation is done using the Xilinx ISE 12.4i design software, targeting a Xilinx Virtex-5 XC5VLX110T FPGA.
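As a software reference point for the same K = 3, rate-1/2 code (taking the common textbook generators 7 and 5 octal, which is an assumption here), a hard-decision Viterbi decoder fits in a few lines of Python:

    G0, G1 = 0b111, 0b101                 # generator polynomials, K = 3

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111
            out += [bin(state & G0).count('1') & 1,
                    bin(state & G1).count('1') & 1]
        return out

    def viterbi_decode(rx):
        # State = last two input bits; path metric = Hamming distance.
        metrics, paths = {0: 0}, {0: []}
        for i in range(0, len(rx), 2):
            new_m, new_p = {}, {}
            for s, m in metrics.items():
                for b in (0, 1):
                    full = ((s << 1) | b) & 0b111
                    e0 = bin(full & G0).count('1') & 1
                    e1 = bin(full & G1).count('1') & 1
                    cand = m + (e0 != rx[i]) + (e1 != rx[i + 1])
                    ns = full & 0b11
                    if cand < new_m.get(ns, 1 << 30):
                        new_m[ns], new_p[ns] = cand, paths[s] + [b]
            metrics, paths = new_m, new_p   # keep survivors only, as in hardware
        best = min(metrics, key=metrics.get)
        return paths[best]

    msg = [1, 0, 1, 1, 0, 0]              # two trailing zeros flush the encoder
    assert viterbi_decode(conv_encode(msg)) == msg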
LDPC (Low Density Parity Check) codes were first reported in 1962 by Gallager as a method of channel coding to defeat noise using low density parity check matrices. LDPC codes were rediscovered in 1996 by MacKay and Neal, who showed they could achieve performance close to Turbo Codes but with lower implementation costs. LDPC codes work by encoding data through matrix operations and decoding by finding the most probable vector that satisfies the parity check equation, with many possible decoding implementations. They have applications in wireless, wired, and optical communications where the code must be designed to balance highest coding gain with ease of implementation based on the throughput requirements and modulation scheme used.
Low power LDPC decoder implementation using layer decoding (ajithc0003)
This document proposes a low-power LDPC decoder implementation using layered decoding. It discusses how LDPC codes can be used for reliable data transmission and are finding increasing use. It describes layered decoding as an efficient approach that can decrease power consumption. The proposed method is vectored layer decoding, which overcomes limitations of traditional layered decoding. It involves encoding data using an LDPC generator matrix derived from the parity check matrix. Simulations were conducted to generate the generator matrix and encode data. The goal is to efficiently implement a low-power LDPC decoder using this vectored layer decoding approach.
High Speed Decoding of Non-Binary Irregular LDPC Codes Using GPUs (Paper) (Enrique Monzo Solves)
Implementation of high-speed decoding of non-binary irregular LDPC codes using CUDA GPUs.
Moritz Beermann, Enrique Monzó, Laurent Schmalen, Peter Vary
IEEE SiPS, Oct. 2013, Taipei, Taiwan
Instruction level parallelism using PPM branch prediction (IAEME Publication)
This document summarizes an approach to instruction level parallelism using prediction by partial matching (PPM) branch prediction. It proposes a hybrid PPM-based branch predictor that uses both local and global branch histories. The two predictors are combined using a neural network. Key aspects of the implementation include:
1. Using local and global history PPM predictors and combining their predictions with a neural network.
2. Enhancements to the basic PPM approach like program counter tagging, efficient history encoding using run-length encoding, tracking pattern bias, and dynamic pattern length selection.
3. Details of the global history PPM predictor, including the use of tables and linked lists to store patterns of different lengths and handle collisions (a toy version is sketched below).
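To ground the idea, a toy PPM predictor backs off from the longest matching branch history to shorter ones and predicts the outcome that most often followed that pattern; the hardware version in the paper realizes this with tables and linked lists. A dictionary-based Python sketch:

    from collections import defaultdict

    class PPMPredictor:
        def __init__(self, max_order=4):
            self.max_order = max_order
            self.counts = defaultdict(lambda: [0, 0])   # context -> [NT, T]
            self.history = []

        def predict(self):
            # Back off from the longest matching context to shorter ones.
            for order in range(min(self.max_order, len(self.history)), 0, -1):
                nt, t = self.counts[(order, tuple(self.history[-order:]))]
                if nt + t:
                    return int(t >= nt)
            return 1                                     # default: predict taken

        def update(self, outcome):
            for order in range(1, min(self.max_order, len(self.history)) + 1):
                self.counts[(order, tuple(self.history[-order:]))][outcome] += 1
            self.history = (self.history + [outcome])[-self.max_order:]

    p = PPMPredictor()
    for outcome in [1, 0, 1, 0, 1, 0, 1]:   # alternating branch
        guess = p.predict()
        p.update(outcome)                   # PPM locks on after a few samples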
This document summarizes a study that analyzed Indian scientific literature in veterinary sciences from 1999-2011. Some key findings include:
- A total of 5,468 publications were analyzed, with the majority (99.09%) being journal articles.
- Research output grew steadily from 1999-2008 but declined in 2009-2010.
- The most common authorship patterns were papers with 3 or 4 authors, indicating collaborative work is prevalent.
- The author with the most publications was Kumar, A. from Punjab Agricultural University with 94 papers.
Numerical computation of eigenenergy and transmission coefficient of symmetri... (IAEME Publication)
This document summarizes a study on numerically computing the eigenenergy and transmission coefficient of a symmetric quantum double barrier structure with variable effective mass under an applied electric field. The study uses the transfer matrix method to solve Schrodinger's equation for a GaAs/AlxGa1-xAs material system. It finds that eigenenergy decreases nonlinearly with increasing electric field. Transmission coefficient decreases with increasing barrier thickness or height but can occur at lower energies with increasing well thickness. The existence of higher quasi-bound states is also observed.
A new approach for design of CMOS based cascode current mirror for ASP applic... (IAEME Publication)
This document discusses a new approach for designing a CMOS-based cascode current mirror circuit for analog signal processing applications. It begins by introducing current mirrors and their importance as core structures in analog, digital, and mixed-signal circuits. It then reviews different configurations of basic current mirror circuits and discusses how cascode configurations can improve performance by maintaining constant voltages. The document proposes an innovative cascode current mirror circuit and evaluates its performance through simulation using a 0.13 micron CMOS technology.
Electromagnetic studies on nano sized magnesium ferrite (IAEME Publication)
The document summarizes research on the electromagnetic properties of nano-sized magnesium ferrite synthesized using microwave techniques. Key findings include:
1) Magnetic properties were measured using VSM which showed the material has a high coercivity of 785.12 Oe, classifying it as a hard magnetic material.
2) Dielectric measurements found the ac conductivity and dielectric constant decreased with increasing frequency. Both increased with temperature initially before decreasing.
3) The dielectric loss showed expected dispersion behavior, decreasing with frequency and generally increasing with temperature.
4) A high quality factor of 150 was obtained, higher than for bulk ferrites, indicating potential applications in microwave devices.
This document summarizes a study comparing different classification models for identifying liver disease types using patient data. It describes applying four classification algorithms - First Order Inductive Learner (FOIL), Classification Based on Association (CBA), Classification based on Multiple Association Rules (CMAR), and Classification based on Predictive Association Rules (CPAR) - to data on liver function tests, other health factors, and diagnosed disease for each patient. Dimensionality reduction was used as a preprocessing step to remove ambiguous attributes. The models were trained on full patient data and tested on replicated data, with results showing accuracy and training time for each classifier. Analysis focused on using the algorithms to identify viral, alcoholic, and non-alcoholic liver diseases.
Study of model predictive control using NI LabVIEW (IAEME Publication)
This document discusses the implementation of model predictive control (MPC) using National Instruments LabVIEW software. It begins with introductions to MPC and LabVIEW. It then covers constructing state space and transfer function models in LabVIEW. Simulation results are presented for MPC applied to first order systems with and without time delay. MPC performance is compared to PID control, showing MPC can handle constraints and optimize process operation while PID cannot. The document concludes MPC simulation using LabVIEW is successful and simulation results are useful for control system design.
This document discusses Kaizen, a philosophy of continuous improvement, and its implementation in an Indian petrochemical plant. It provides background on total quality management (TQM) and defines Kaizen as continuous, gradual improvement involving everyone. The principles of Kaizen emphasize that employees are a company's most important asset and that success comes from consistent, incremental changes rather than occasional radical changes. Kaizen aims to improve all aspects of operations through activities like quality circles, process management, and eliminating waste.
Spatial and temporal study of a mechanical and harmonic vibration by high spe... (IAEME Publication)
The document summarizes a study that uses high-speed optical interferometry to analyze the spatial and temporal evolution of vibrations in a mechanically excited rectangular metal plate. A high-speed CMOS camera captures 4000 frames per second of the plate's free vibration. An algorithm is used to extract phase maps from the interferograms, showing the plate's deformation over time and allowing reconstruction of the vibration cycle. Simulated results demonstrate the technique's ability to measure an unknown vibration using 12 sample interferograms without synchronization requirements.
Comparison and analysis of combining techniques for spatial multiplexing spac... (IAEME Publication)
This document compares different combining techniques for space-time block coded systems in Rayleigh fading channels. It finds that maximum ratio combining outperforms other techniques like equal gain combining and selection combining for any space-time block code configuration, providing the best bit error rate. The document provides background on space-time block codes, describes the Alamouti space-time code, and discusses various receive diversity combining techniques.
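The conclusion follows from how the combiners weight their branches: MRC scales each branch by its conjugate channel gain, which maximizes the post-combining SNR, while EGC only co-phases and SC keeps a single branch. A one-function numpy sketch of MRC:

    import numpy as np

    def maximum_ratio_combine(r, h):
        # r: received samples per branch, h: complex channel gains.
        # conj(h) co-phases the branches and weights each by its amplitude.
        return np.vdot(h, r) / np.sum(np.abs(h) ** 2)

    rng = np.random.default_rng(1)
    h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)
    r = h * 1.0 + 0.1 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    print(maximum_ratio_combine(r, h).real)    # close to the sent symbol +1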
A Configurable and Low Power Hard-Decision Viterbi Decoder in VLSI Architecture (IRJET Journal)
This document describes a configurable and low power VLSI architecture for a hard-decision Viterbi decoder. It proposes a design that can be configured for different numbers of traceback steps (N) by adjusting traceback parameters without major modifications to the register transfer level design. The design aims to consume low power. It was synthesized in Xilinx and showed good results for operational speed and area consumption when tested for N=32 and N=64 traceback steps. Viterbi decoding is an important error correction technique that involves convolutional encoding, transmission with potential errors, and decoding using the Viterbi algorithm. Low power is a priority for Viterbi decoders due to their power consumption.
NON-STATISTICAL EUCLIDEAN-DISTANCE SISO DECODING OF ERROR-CORRECTING CODES OV... (IJCSEA Journal)
In this paper we describe novel non-statistical Euclidean-distance soft-input, soft-output (SISO) decoding algorithms for the three currently most important error-correcting codes: the low-density parity-check (LDPC), turbo and polar codes. The metric is squared Euclidean distance, and the decoders operate using an antilog-log (AL) process. We have investigated the simulated bit-error rate (BER) performance of these non-statistical algorithms on three channel models: the additive white Gaussian noise (AWGN), Rayleigh fading and Middleton's Class-A impulsive noise channels, and compare them with the BER performances of the corresponding statistical decoding algorithms for the three codes and channels. In all cases the performance over the AWGN channel of the non-statistical algorithms is almost the same as or slightly better than that of the statistical algorithms. In some cases the performance over the two non-Gaussian channels of the non-statistical algorithms is worse than that of the statistical algorithms, but the use of a simple signal amplitude limiter placed before the decoder input significantly improves the actual and relative performances of the algorithms. Thus there is no performance loss, and sometimes a significant performance gain, for the proposed decoding algorithms. A major advantage of our algorithms is that estimation of the channel signal-to-noise ratio is not required, which in practice simplifies system implementation. In addition, we have found that the processing complexity of the non-statistical algorithms is similar to or slightly less than that of the corresponding statistical algorithms, and is significantly less for the LDPC codes over all of the channels.
NON-STATISTICAL EUCLIDEAN-DISTANCE SISO DECODING OF ERROR-CORRECTING CODES OV...IJCSEA Journal
This document describes novel non-statistical soft-input, soft-output (SISO) decoding algorithms for low-density parity-check (LDPC), turbo, and polar codes. These algorithms use squared Euclidean distance as the decoding metric rather than a statistical metric. They also use an "antilog-log" process to combine metrics without requiring knowledge of channel noise variance. Simulation results show the non-statistical algorithms achieve similar or better performance than statistical algorithms over additive white Gaussian noise channels. Over some non-Gaussian channels, performance is comparable after using a simple input limiter. A key advantage is these algorithms do not require estimating the channel signal-to-noise ratio.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
An Energy-Efficient Lut-Log-Bcjr Architecture Using Constant Log Bcjr AlgorithmIJERA Editor
Error-correcting codes are used to recover data from signals corrupted by noise and interference. Among the many error-correcting codes, turbo codes are considered the best because their performance comes very close to the Shannon theoretical limit. The MAP algorithm is commonly used in the turbo decoder. Among the different versions of the MAP algorithm, the constant-log-BCJR algorithm has lower complexity and good error performance. The constant-log-BCJR algorithm can be easily designed using a look-up table, which reduces memory consumption. The proposed constant-log-BCJR decoder is designed to decode two blocks of data at a time, which increases throughput. The complexity of the decoder is further reduced by the use of add-compare-select (ACS) units and registers. The proposed decoder was simulated using Xilinx ISE and synthesized for a Spartan-3 FPGA, and it was found that the constant-log-BCJR decoder utilizes less memory and power than the LUT-log-BCJR decoder.
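For reference, the constant-log-MAP idea the abstract builds on replaces the exact max* correction term with a fixed constant. A minimal Python sketch follows; the constants 0.5 and 1.5 are the values commonly quoted for constant-log-MAP and are assumptions here, not taken from the paper.

    # Constant-log-MAP approximation: max*(a, b) = max(a, b) + log(1 + e^{-|a-b|})
    # is replaced by max(a, b) plus a fixed correction when the inputs are close.
    import math

    def max_star_exact(a, b):
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def max_star_constant(a, b, c=0.5, threshold=1.5):
        # Add-compare-select with a table-free constant correction term.
        return max(a, b) + (c if abs(a - b) < threshold else 0.0)

    for a, b in [(0.2, 0.3), (2.0, -1.0), (5.0, 5.2)]:
        print(a, b, round(max_star_exact(a, b), 3), max_star_constant(a, b))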
Fpga implementation of (15,7) bch encoder and decoder for text messageeSAT Journals
Abstract: In a communication channel, noise and interference are the two main sources of errors that occur during transmission of a message. Thus, to achieve error-free communication, error control codes are used. This paper discusses an FPGA implementation of a (15, 7) BCH encoder and decoder for text messages using the Verilog Hardware Description Language. Initially, each character in a text message is converted into 7 bits of binary data. These 7 bits are encoded into a 15-bit codeword using the (15, 7) BCH encoder. Any 2-bit error, in any position of the 15-bit codeword, is detected and corrected. The corrected data is converted back into an ASCII character. The decoder is implemented using the Peterson algorithm and the Chien search algorithm. Simulation was carried out using the Xilinx 12.1 ISE simulator, and the results were verified for arbitrarily chosen message data. Synthesis was successfully done using the RTL compiler, and power and area were estimated for 180nm technology. Finally, both the encoder and decoder designs were implemented on a Spartan 3E FPGA. Index Terms: BCH Encoder, BCH Decoder, FPGA, Verilog, Cadence RTL compiler
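As a rough illustration of the encoding step, the Python sketch below performs systematic (15, 7) BCH encoding by polynomial division over GF(2). The generator g(x) = x^8 + x^7 + x^6 + x^4 + 1 is the one usually quoted for the double-error-correcting (15, 7) code; the paper's Verilog encoder and its Peterson/Chien decoder are not reproduced.

    # Systematic (15,7) BCH encoding sketch: codeword = message | parity, where
    # parity = (message * x^(n-k)) mod g(x), all arithmetic over GF(2).
    G = 0b111010001          # g(x) = x^8 + x^7 + x^6 + x^4 + 1 (assumed generator)
    N, K = 15, 7

    def bch_encode(msg7):
        """msg7: 7-bit integer -> 15-bit systematic codeword."""
        rem = msg7 << (N - K)                # multiply message by x^(n-k)
        for shift in range(N - 1, N - K - 1, -1):
            if rem & (1 << shift):           # reduce modulo g(x)
                rem ^= G << (shift - (N - K))
        return (msg7 << (N - K)) | rem       # append 8 parity bits

    cw = bch_encode(ord('A') & 0x7F)         # encode ASCII 'A' as in the paper's flow
    print(f"{cw:015b}")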
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A computationally efficient method to find transformed residueiaemedu
The document discusses reducing the computational complexity of intra 4x4 prediction in H.264 video encoding. It first analyzes the complexity of standard intra 4x4 prediction, which involves predicting each 4x4 block using all nine prediction modes. It finds the complexity to be high due to repeating common calculations for each mode. The paper then proposes a novel method to directly determine transformed residue coefficients for intra 4x4 blocks, reducing unnecessary computations and enhancing encoding speed. Experimental results show the new method reduces computational complexity by at least 40% without impacting coding efficiency or performance.
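As a concrete reference for the transform step mentioned above, here is a small Python sketch of the standard H.264 4x4 forward integer core transform applied to a residue block; scaling and quantization are omitted, the prediction and source blocks are hypothetical, and this illustrates the quantity computed rather than the paper's fast direct method.

    # W = C . X . C^T for a 4x4 integer residue block (pre-quantization),
    # using the standard H.264 core transform matrix C.
    import numpy as np

    C = np.array([[1,  1,  1,  1],
                  [2,  1, -1, -2],
                  [1, -1, -1,  1],
                  [1, -2,  2, -1]])

    def forward_core_transform(residue4x4):
        """Forward 4x4 integer core transform of a residue block."""
        return C @ residue4x4 @ C.T

    pred = np.full((4, 4), 128)               # hypothetical flat prediction
    orig = np.arange(16).reshape(4, 4) + 120  # hypothetical source block
    print(forward_core_transform(orig - pred))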
This document summarizes a paper presented at the International Conference on Emerging Trends in Engineering and Management in 2014. The paper proposes a low complexity turbo decoder architecture using a modified Add Compare Select (ACS) unit and registers to implement the Constant log BCJR algorithm. The Constant log BCJR algorithm reduces complexity compared to other MAP decoding algorithms. The proposed decoder is designed to decode two blocks of data simultaneously, increasing throughput. Simulation and synthesis results showed the Constant log BCJR decoder uses less memory and power than an LUT log BCJR decoder.
This document summarizes a research paper that proposes a new binary tree algorithm for implementing a Huffman decoder. It begins by explaining the disadvantages of using an array data structure to represent the Huffman decoding tree and how the proposed binary tree method requires less memory. The proposed decoder is then implemented using ASIC and FPGA design tools. Performance metrics like power, area, and number of registers are obtained and compared between the ASIC and FPGA implementations. Simulation results show that the ASIC implementation has lower power consumption than the FPGA version. In conclusion, the binary tree algorithm is shown to improve memory usage for Huffman decoding.
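To make the tree-based decoding concrete, here is a minimal Python sketch with a hypothetical three-symbol code table; the paper's ASIC/FPGA implementation is not reproduced.

    # Tree-based Huffman decoding: walk left/right per input bit, emit the
    # symbol at each leaf and restart from the root.
    class Node:
        def __init__(self, symbol=None, left=None, right=None):
            self.symbol, self.left, self.right = symbol, left, right

    def build_tree(code_table):
        root = Node()
        for symbol, code in code_table.items():
            node = root
            for bit in code:
                attr = 'left' if bit == '0' else 'right'
                if getattr(node, attr) is None:
                    setattr(node, attr, Node())
                node = getattr(node, attr)
            node.symbol = symbol
        return root

    def decode(bits, root):
        out, node = [], root
        for bit in bits:
            node = node.left if bit == '0' else node.right
            if node.symbol is not None:       # leaf reached: emit and restart
                out.append(node.symbol)
                node = root
        return out

    tree = build_tree({'a': '0', 'b': '10', 'c': '11'})  # hypothetical code table
    print(decode('010110', tree))                        # ['a', 'b', 'c', 'a']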
IRJET-Error Detection and Correction using Turbo CodesIRJET Journal
This document summarizes a research paper on using turbo codes for error detection and correction. It discusses:
1) Turbo codes use parallel convolutional encoders separated by an interleaver to achieve near-Shannon-limit performance with forward error correction. The encoding and decoding of text and images are described.
2) Decoding is done iteratively using max-log-MAP or log-MAP algorithms to calculate reliability metrics and soft outputs for error correction.
3) The encoding process involves two recursive systematic convolutional encoders with an interleaver between them; a sketch of this structure follows the list. Decoding is also iterative and uses log-MAP-type algorithms to calculate branch metrics and state metrics to output soft decisions.
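A tiny Python sketch of the parallel-concatenated structure in item 3 follows; the RSC generators (feedback 7, feedforward 5, octal) and the fixed interleaver are illustrative textbook choices, not the paper's parameters.

    # Two identical RSC (1, 5/7) encoders separated by an interleaver,
    # producing a rate-1/3 turbo codeword: systematic bits plus two parity streams.
    def rsc_encode(bits):
        """RSC with feedback 1 + D + D^2 and feedforward 1 + D^2."""
        s1 = s2 = 0
        parity = []
        for u in bits:
            a = u ^ s1 ^ s2          # feedback bit entering the register
            parity.append(a ^ s2)    # feedforward taps: current bit and D^2
            s1, s2 = a, s1
        return parity

    def turbo_encode(bits, perm):
        p1 = rsc_encode(bits)
        p2 = rsc_encode([bits[i] for i in perm])  # interleaved copy to encoder 2
        return bits, p1, p2

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    perm = [3, 7, 1, 5, 0, 4, 2, 6]              # hypothetical interleaver
    print(turbo_encode(msg, perm))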
A NOVEL APPROACH FOR LOWER POWER DESIGN IN TURBO CODING SYSTEMVLSICS Design
Low power is an extremely important issue for future mobile communication systems; the focus of this paper is to implement turbo codes for low-power solutions. The effect on performance of variations in parameters such as frame length, number of iterations, type of encoding scheme and type of interleaver in the presence of additive white Gaussian noise is studied with a floating-point model. In order to capture the effect of quantization and word-length variation, a fixed-point model of the application is also developed. The application performance measure, namely bit-error rate (BER), is used as a design constraint while optimizing for power and area coverage. Low-power optimization is performed at the implementation level by the use of voltage scaling. With these techniques the power is reduced by 98.5%, the area (LUTs) by 57%, and the speed grade is increased. This type of power manager is proposed and implemented based on the timing details of the turbo decoder in the VHDL model.
IMPROVING PSNR AND PROCESSING SPEED FOR HEVC USING HYBRID PSO FOR INTRA FRAME...ijma
High Efficiency Video Coding (HEVC) is the newest video codec, significantly increasing the coding efficiency of its ancestor H.264/Advanced Video Coding. However, HEVC comes with greatly increased computational complexity. In this paper, a coding unit partitioning pattern optimization method based on particle swarm optimization (PSO) is proposed to reduce the computational complexity of hierarchical quadtree-based coding unit partitioning. The coding unit partitioning pattern required for exhaustive partitioning and the rate-distortion cost are treated as the chromosome and the fitness function of the PSO, respectively. To reduce computation time, a cellular-automata-based (CA) rule time limit is used to find the best possible modes of operation. Compared to current state-of-the-art algorithms, this scheme is computationally simple and achieves superior reconstructed video quality (a 12% increase in PSNR compared to existing methods) at lower computational complexity (overall delay reduced by 40%), increasing the bandwidth and reducing errors.
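For orientation, a bare-bones PSO loop of the kind the abstract applies to coding-unit partition patterns is sketched below in Python; the fitness function is a placeholder standing in for RD cost, not the paper's HEVC integration.

    # Minimal PSO: particles encode candidate partition parameters, fitness is
    # a stand-in for RD cost. Swarm size, inertia and acceleration constants
    # are conventional defaults, not the paper's settings.
    import random

    def pso(fitness, dim, n_particles=10, iters=50, w=0.7, c1=1.4, c2=1.4):
        xs = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        gbest = min(pbest, key=fitness)
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * random.random() * (pbest[i][d] - x[d])
                                + c2 * random.random() * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                if fitness(x) < fitness(pbest[i]):
                    pbest[i] = x[:]
            gbest = min(pbest, key=fitness)
        return gbest

    # Stand-in "RD cost": squared distance from an arbitrary target pattern.
    target = [0.2, 0.8, 0.5, 0.1]
    rd_cost = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
    print([round(v, 2) for v in pso(rd_cost, dim=4)])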
Exploiting 2-Dimensional Source Correlation in Channel Decoding with Paramete...IJECEIAES
The document describes a proposed joint source-channel coding (JSCC) system that exploits 2-dimensional source correlation in channel decoding with parameter estimation. The system uses a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm at the decoder to exploit source correlation on rows and columns of a 2D source. A parameter estimation technique based on the Baum-Welch algorithm is used jointly with the decoder to estimate source correlation parameters at the receiver since these parameters are not always known in practice. Simulation results show that the proposed coding scheme that performs joint decoding and parameter estimation performs very close to an ideal 2D JSCC system with perfect knowledge of source correlation parameters.
IMPROVING PSNR AND PROCESSING SPEED FOR HEVC USING HYBRID PSO FOR INTRA FRAME...ijma
This document summarizes a research paper that proposes a new method to improve the efficiency of the HEVC video coding standard. The proposed method uses particle swarm optimization (PSO) to optimize the coding unit partitioning patterns in HEVC in order to reduce computational complexity. Compared to existing algorithms, the proposed PSO-based method achieves a 12% increase in PSNR video quality while reducing computational complexity by 40% and overall encoding delay. The method integrates this optimized intra-frame prediction into the HEVC encoding and decoding process.
Implementation of Carry Skip Adder using PTLIRJET Journal
The document proposes a design and implementation of a carry skip adder using pass transistor logic to improve performance over a conventional carry skip adder. It describes the structures of a conventional carry skip adder and the proposed pass transistor logic carry skip adder. Simulation results show that the proposed design reduces the number of transistors, area, delay, and average power compared to the conventional carry skip adder.
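An illustrative behavioural model of the carry-skip idea may help: ripple within fixed-width blocks, but let a block's carry-in bypass the block when every bit position propagates. The block width of 4 is an arbitrary choice, and the paper's transistor-level PTL design is not modelled.

    # Behavioural carry-skip adder: per-block ripple carry, with a skip path
    # (carry-in forwarded to carry-out) when all bits in the block propagate.
    def carry_skip_add(a, b, width=16, block=4):
        carry, total = 0, 0
        for base in range(0, width, block):
            abits = [(a >> i) & 1 for i in range(base, base + block)]
            bbits = [(b >> i) & 1 for i in range(base, base + block)]
            propagate_all = all(x ^ y for x, y in zip(abits, bbits))
            cin, c = carry, carry
            for i in range(block):                    # ripple inside the block
                s = abits[i] ^ bbits[i] ^ c
                total |= s << (base + i)
                c = (abits[i] & bbits[i]) | (c & (abits[i] ^ bbits[i]))
            carry = cin if propagate_all else c       # skip path vs ripple carry
        return total

    assert carry_skip_add(0x1234, 0x0FFF) == (0x1234 + 0x0FFF) & 0xFFFF
    print(hex(carry_skip_add(0x1234, 0x0FFF)))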
In this paper, we describe an FPGA H.264/AVC encoder architecture that performs in real time. To reduce the critical path length and to increase throughput, the encoder uses a parallel and pipelined architecture, and all modules have been optimized with respect to area cost. Our design is described in VHDL and synthesized for an Altera Stratix III FPGA. The throughput of the FPGA architecture reaches a processing rate of more than 177 million pixels per second at 130 MHz, permitting its use for the H.264/AVC standard directed at HDTV.
Similar to Estimation of bitlength of transformed quantized residue (20)
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-...IAEME Publication
The white layer thickness (WLT) formed and the surface roughness in wire electric discharge turning (WEDT) of tungsten carbide composite have been modelled through response surface methodology (RSM). A Taguchi standard design of experiments involving five input variables with three levels was employed to establish a mathematical model between the input parameters and the responses. The percentage of cobalt content, spindle speed, pulse on-time, wire feed and pulse off-time were varied during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
The study explores the reasons for transgender persons to become entrepreneurs. In this study, transgender entrepreneurship was taken as the independent variable and the reasons for becoming an entrepreneur as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined data from 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India. A simple random sampling technique was used. The Garrett ranking technique (percentile position, mean scores) was used in the analysis to identify the top 13 stimulus factors for establishing a trans-entrepreneurial venture. The economic advancement of a nation is governed by the outcome of resolute entrepreneurial activity. The conception of entrepreneurship has stretched and materialized to the socially marginalized, uncharted sections of the transgender community. Presently, transgender people have smashed their stereotypes and are making recent headlines of achievement in various fields of Indian society. The trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research revealed that optimistic changes are taking place toward an affirmative societal outlook on transgender entrepreneurial ventures. The study also laid emphasis on encouraging other transgender people to renovate their traditional living. The paper further highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
For ages, gender difference has been a debatable theme, whether caused by nature, evolution or environment. The birth of a transgender child is dreadful not only for the child but also for the parents. The pain of living in the wrong physique and being treated as a victimized second-class citizen is outrageous and harboured with vicious, baseless negative scruples. For so long, social exclusion has perpetuated inequality and deprivation, with ingrained malign stigma and besieged victims of crime or violence across their life spans. They are pushed into a murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible, very little is known about them. The common public needs to comprehend the arrogance heaped on these souls and assist in integrating them into the mainstream by offering equal opportunity, treating them with humanity and respecting their dignity. Entrepreneurship in the current age is endorsing the gender-fairness movement. Unstable careers and economic inadequacy have inclined one group of gender-variant people, the transgender community, to become entrepreneurs. These tiny budding entrepreneurs have brought about economic transition by means of employment, freedom from the clutches of stereotyped jobs, a raised standard of living and a measure of financial empowerment. Besides all these inhibitions, they were able to find a platform for skill-set development that ignited them to enter the entrepreneurial domain. This paper epitomizes the skill sets of trans-entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking endeavour to explore the various skills incorporated and their impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
Technology upgrades in the banking sector have moved the economy toward online payment modes using mobile applications. This system enables convenient connectivity between banks, merchants and users. There are various applications used for online transactions, such as Google Pay, Paytm, FreeCharge, MobiKwik, Oxigen, PhonePe and so on, as well as mobile banking applications. The study aimed at evaluating users' predilection in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that PhonePe should focus on implementing a more user-friendly interface, and GPay on motivating users to appreciate the request-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
The prototype of a voice-based ATM for the visually impaired using Arduino is intended to help people who are blind. It uses RFID cards that contain the user's fingerprint encrypted on them and interacts with the user through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode: normal or blind. The user can select the respective mode through voice input; if blind mode is selected, balance checks and cash withdrawals can be done through voice input. The normal-mode procedure is the same as in existing ATMs.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
There is increasing acceptance of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence, as the ability to build capacity, empathize, co-operate, motivate and develop others, cannot be divorced from either effective performance or human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and in navigating both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, and conflict and diversity management to keep the global enterprise on the path of productivity and sustainability. Using an exploratory research design and 255 participants, the results of this original study indicate a strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions for further studies on emotional intelligence and human capital development and recommends conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century reminds all of us of the necessity of the police and its administration. The more we advance into modern society and culture, the more we require the services of the so-called 'khaki-clad' personnel, i.e., the police. Whether we speak of the Indian police or of other nations' forces, they have the same recognition as they have in India. But, as already mentioned, their services and requirements changed after incidents like that of 26th November 2008, where they sacrificed themselves without hesitation and without regard for their own lives or for their family members and wards. In other words, they are like heroes and mentors who can guide us out of the darkness of fear, militancy, corruption and the other dark sides of life. Now the question arises: if Gandhi were alive today, what would be his reaction to the police and its functioning? Would he have something different in mind now from what he had before Partition, or would he start a Satyagraha aimed at improving the functioning of police administration? Such questions, or rather nightmares, can come to anyone's mind when so much confusion prevails in our minds, when there is so much corruption in society, and when the police's working is also in question because of one case or another across India. It is a matter of great concern that we have to think over our administration and our practical approach, because police personnel are also like us; they are part and parcel of our society and among us, so why do we all point fingers at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
The goal of this study was to see how talent management affects employee retention in selected IT organizations in Chennai. The fundamental issue was the difficulty of attracting, hiring, and retaining talented personnel who perform well, and the gap between the supply and demand of talent acquisition and retention within firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, to investigate talent management strategies that IT companies could use to improve talent acquisition, performance management and career planning, and to formulate retention strategies that IT firms could use. The respondents were given a structured close-ended questionnaire with a 5-point Likert scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for the Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management has an impact on employee retention. The study also found a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
Globally, millions of dollars are spent by organizations on employing skilled Information Technology (IT) professionals. It is costly to replace employees, as IT professionals possess technical skills and competencies that aid in interconnecting business processes. Organizations' employment tactics have been forced to change by globalization and technological innovation, as firms downsize to remain lean, outsource to concentrate on core competencies, and restructure or reallocate personnel to gain efficiency. As other jobs, organizations or professions become comparatively more attractive in a shifting employment landscape, these alterations trigger both involuntary and voluntary turnover. Employees' views of their jobs have also been affected by the COVID-19 pandemic and the employee-driven labour market. So, effective strategies are necessary to tackle the withdrawal rate of employees. This study analyzes the rise in attrition rate by linking Emotional Intelligence (EI) with Talent Management (TM) in the IT industry. Responses were received from only 303 of the 350 participants to whom questionnaires were distributed. The data were gathered from employees of IT organizations located in Bangalore (India), using a simple random sampling methodology. Hypotheses were generated and tested, and the effects of EI and TM were analyzed along with a regression analysis between TM and EI. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI and TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
By implementing a talent management strategy, organizations are able to retain their skilled professionals while also improving their overall performance. It is the process of appropriately utilizing the right individuals, preparing them for future top positions, evaluating and managing their performance, and keeping them from leaving the organization. It is employee performance that determines the success of every organization. A firm quickly gains an upper hand over its rivals when its employees have particular skills that cannot be duplicated by the competitors. Thus, firms are centred on creating successful talent management practices and processes to manage their unique human resources. Firms additionally endeavour to keep their top/key staff since, if they leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if talent is appropriately managed and strategies are implemented properly, organizations can make the most of their retained assets to support growth and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
The Banking Regulation Act of India, 1949 defines banking as "acceptance of deposits for the purpose of lending or investment from the public, repayment on demand or otherwise and withdrawable through cheques, drafts, order or otherwise". The major participants of the Indian financial system are commercial banks; the financial institutions, encompassing term-lending institutions, investment institutions, specialized financial institutions and state-level development banks; non-banking financial companies (NBFCs); and other market intermediaries such as stock brokers and money lenders, who are among the oldest of the variants of NBFCs and the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing non-performing assets (NPAs). NPA growth directly and indirectly affects the quality of assets and profitability of banks. It also reflects the efficiency of banks' credit risk management and the effectiveness of recovery. NPAs do not generate any income, while the bank is required to make provisions for such assets, which is why they are a double-edged weapon. This paper outlines the quality of bank loans of different types, such as housing, agriculture and MSME loans, of selected public and private sector banks in the state of Haryana. The study highlights problems associated with the role of commercial banks in financing small and medium-scale enterprises (SMEs). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operation of MSMEs in the country and to generate recommendations for more robust financing mechanisms for the successful operation of MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPAs. Much research has been conducted on the topic of non-performing asset (NPA) management, concerning particular banks, comparative studies of public and private banks, etc. In this paper the researcher considers aggregate data of selected public and private sector banks and attempts to compare the NPAs of housing, agriculture and MSME loans of public and private sector banks in the state of Haryana. The tools used in the study are averages, variance and the ANOVA test. The findings reveal that NPAs are a common problem for both public and private sector banks and are associated with all types of loans, whether housing loans, agriculture loans or loans to SMEs. The NPAs of both public and private sector banks show an increasing trend. In 2010-11 the GNPA of the public and private sectors was at the same level of 2%, but after 2010-11 it increased many-fold, and at present the GNPA of some banks is more than 15%. This shows the dark side of the Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
An experiment conducted in this study found that BaSO4 changed the mechanical properties of Nylon 6. Nylon 6 composites were made with BaSO4 in varying weight ratios. The researchers investigated the hardness and wear behaviour of Nylon-6/BaSO4 composites. Experiments were conducted based on a Taguchi L9 design. The hardness number of the Nylon-6/BaSO4 composites was measured using a Rockwell hardness testing apparatus. The wear behaviour of Nylon/BaSO4 was measured on a pin-on-disc wear monitor by varying reinforcement, sliding speed and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study shows a significant contribution to ultimate strength from increasing the BaSO4 content up to 16% in the composites, while sliding speed contributes 72.45% to the wear rate.
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages, and the village is the backbone of the country. Village or rural industries play an important role in the national economy, particularly in rural development. Developing the rural economy is one of the key indicators of a country's success. Whether it be the need to look after the welfare of farmers or to invest in rural infrastructure, governments have to ensure that rural development is not compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of the rural masses. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings economic value to the rural sector by creating new methods of production, new markets and new products, and by generating employment opportunities, thereby ensuring continuous rural development. Social entrepreneurship has the direct and primary objective of serving society along with earning profits. Social entrepreneurship thus differs from economic entrepreneurship in that its basic objective is not to earn profits but to provide innovative solutions to meet societal needs that are not addressed by the majority of entrepreneurs, whose sole objective is profit-making. Social entrepreneurs therefore have huge growth potential, particularly in developing countries like India, where there are huge societal disparities in terms of the financial position of the population. Still, 22 percent of the Indian population is below the poverty line, and there is disparity between the rural and urban populations in terms of families living below the poverty line: 25.7 percent of the rural population and 13.7 percent of the urban population are below the poverty line, which clearly shows the disparity of poor people in rural and urban areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment and social tension; these are the factors that led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper examines the tendencies of social entrepreneurship in India and presents successful examples of such businesses, providing recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has made some steps towards the development of social enterprises, social entrepreneurship and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
The distribution system is a critical link between the electric power distributor and the consumers. The network most commonly used by electric utilities is the radial distribution network. However, this type of network has technical issues, such as large power losses, which affect the quality of supply. Nowadays, the introduction of Distributed Generation (DG) units in the system helps improve and support the voltage profile of the network as well as the performance of the system components through power-loss mitigation. In this study, network reconfiguration was carried out using two meta-heuristic algorithms, Particle Swarm Optimization and the Gravitational Search Algorithm (PSO-GSA), to enhance power quality and the voltage profile of the system when applied simultaneously with the DG units. The backward/forward sweep method was used in the load-flow analysis and simulated using MATLAB. Five cases were considered in the reconfiguration based on the contribution of DG units. The proposed method was tested on the IEEE 33-bus system. Based on the results, the voltage profile of the system improved from 0.9038 p.u. to 0.9594 p.u. The integration of DG in the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are presented to show the performance of each case.
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement, manufacturing industries are taking various initiatives using lean tools and techniques. In some manufacturing industries, however, the frugal approach is applied in product design and services as a tool for improvement. The frugal approach has helped prove that less is more, and it seems to contribute indirectly to improving productivity. Hence, there is a need to understand the status of the application of the frugal approach in manufacturing industries. All manufacturing industries are trying hard and putting in continuous efforts for competitive existence. For productivity improvement, manufacturing industries are coming up with different effective and efficient solutions in manufacturing processes and operations, and to overcome current challenges they have started using the frugal approach in product design and services. For this study, the methodology adopted draws on both primary and secondary sources of data: interview and observation techniques were used for the primary source, and for the secondary source a review was done of the available literature on websites, in printed magazines, manuals, etc. An attempt has been made to understand the application of the frugal approach through the study of a manufacturing industry project; the industry selected for this project study is Mahindra and Mahindra Ltd. This paper will help researchers find the connections between the two concepts of productivity improvement and the frugal approach, understand the significance of the frugal approach for productivity improvement in manufacturing industry, and understand the current scenario of the frugal approach in manufacturing industry. In manufacturing industries, various processes are involved in delivering the final product, and in the process of converting input into output, productivity plays a very critical role. Hence this study will help establish the status of the frugal approach in productivity improvement programmes. The notion of frugality can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
In this paper, we investigate a fuzzy-environment-based multiple-channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. A nonagonal fuzzy number is applied to analyse the relevant performance measures of the multiple-channel queuing model (M/M/C) ( /FCFS). Based on the sub-interval average ranking method for nonagonal fuzzy numbers, we convert fuzzy numbers to crisp ones. Numerical results reveal the efficiency of this method. Intuitively, the fuzzy environment adapts well to the multiple-channel queuing model (M/M/C) ( /FCFS).
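As a crisp (non-fuzzy) baseline for the measures the paper fuzzifies, the Python sketch below computes the classical M/M/c FCFS utilisation, mean queue length Lq and waiting time Wq; the nonagonal-fuzzy and sub-interval ranking steps are not shown, and the rates used are arbitrary.

    # Steady-state M/M/c measures from the standard Erlang-C formulas.
    from math import factorial

    def mmc_metrics(lam, mu, c):
        """Arrival rate lam, service rate mu, c servers; requires lam < c * mu."""
        a = lam / mu                      # offered load
        rho = a / c                       # per-server utilisation (must be < 1)
        p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                    + a**c / (factorial(c) * (1 - rho)))
        lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)  # mean queue length
        return rho, lq, lq / lam          # rho, Lq, Wq (via Little's law)

    print(mmc_metrics(lam=4.0, mu=1.5, c=4))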