International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, among many others.
LDPC BASED ERROR CORRECTION WITH BIT LEVEL AND SYMBOL LEVEL SYNCHRONIZATION USING MARKER CODE OPTIMIZATION
A low-density parity-check (LDPC) code is used for error correction, together with a marker code for synchronization.
The marker code structures offer the highest achievable rate when standard bit-level synchronization is performed.
A symbol-level synchronization algorithm operating on groups of bits is presented, and it is shown how it improves the achievable rate as well as the error-rate performance.
When multiple-pass decoding is performed, extrinsic information transfer (EXIT) charts are used to analyze the receiver.
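The marker-based idea can be sketched in a few lines: a known pattern is inserted at fixed intervals, and the receiver scores candidate alignments against the expected marker positions. The marker pattern and period below are illustrative assumptions, not the optimized structures from the paper.

```python
MARKER = [0, 1, 1]   # hypothetical marker pattern (an assumption, not the paper's)
PERIOD = 8           # data bits between consecutive markers (assumed)

def insert_markers(bits):
    """Periodically insert a known marker so the receiver can re-acquire alignment."""
    out = []
    for i in range(0, len(bits), PERIOD):
        out.extend(MARKER)
        out.extend(bits[i:i + PERIOD])
    return out

def estimate_offset(rx):
    """Pick the alignment whose periodic marker positions best match the pattern."""
    step = PERIOD + len(MARKER)
    best, best_score = 0, -1
    for off in range(step):
        score = 0
        for start in range(off, len(rx) - len(MARKER) + 1, step):
            score += sum(rx[start + k] == MARKER[k] for k in range(len(MARKER)))
        if score > best_score:
            best, best_score = off, score
    return best

data = [1, 0, 1, 1, 0, 0, 1, 0] * 4
assert estimate_offset(insert_markers(data)) == 0
```

In a full system the soft marker likelihoods would feed the LDPC decoder rather than a hard alignment decision, but the scoring loop above is the core of bit-level marker synchronization.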
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Study of the operational SNR while constructing polar codes (IJECEIAES)
Channel coding protects information communicated across an unreliable medium by adding patterns of redundancy to the transmission. Also referred to as forward error control coding (FECC), the technique is widely used to correct, or at least detect, bit errors in digital communication systems. In this paper we study a FECC scheme known as polar coding, which has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates while varying the blocklength. Our extensive simulations show that the BER becomes more sensitive to the operational SNR (OSNR) as the blocklength and code rate increase. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate changes the OSNR domain. This trade-off must be taken into consideration when designing polar codes for high-throughput applications.
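The encoder underlying these experiments can be sketched with the textbook recursive Arikan transform. This is a generic illustration, not the construction studied in the paper, and the frozen-bit positions chosen below are arbitrary rather than a reliability-optimized set.

```python
def polar_encode(u):
    """Recursive Arikan polar transform over GF(2) (length must be a power of two).
    This computes the transform up to the usual bit-reversal permutation."""
    n = len(u)
    if n == 1:
        return u[:]
    combined = [u[2*i] ^ u[2*i + 1] for i in range(n // 2)]  # XOR (combined) branch
    passed = [u[2*i + 1] for i in range(n // 2)]             # pass-through branch
    return polar_encode(combined) + polar_encode(passed)

# Toy rate-1/2 code of blocklength 8. These frozen positions are illustrative
# only -- a real construction picks them from a channel-reliability ordering.
frozen = {0, 1, 2, 4}
message = [1, 0, 1, 1]
u, it = [0] * 8, iter(message)
for i in range(8):
    if i not in frozen:
        u[i] = next(it)
codeword = polar_encode(u)
assert polar_encode(codeword) == u  # the transform is its own inverse over GF(2)
```

The self-inverse property makes the transform cheap to verify; in practice the hard part, and the subject of the paper's SNR search, is deciding which positions to freeze for a given operating point.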
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
A new channel coding technique to approach the channel capacity (ijwmn)
Since Shannon’s 1948 channel coding theorem, many channel coding techniques have been developed to approach the Shannon limit, and a wide range of channel codes is available with different complexity levels and error-correction performance. Many powerful coding schemes have been deployed on the power-limited additive white Gaussian noise (AWGN) channel. However, most existing channel codes appear to have reached the end of their advancement path. This article introduces a new coding technique that can be used either as the last stage of a concatenated coding scheme or in a parallel configuration with other powerful channel codes, achieving reliable error performance with moderately complex decoding. We work through an example to explain the overall approach of the proposed technique, and finally present simulation results over an AWGN channel to demonstrate its potential.
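The idea of placing a simple code at one stage of a concatenated scheme can be sketched generically. The repetition-3 inner code and single-parity-check outer code here are deliberately minimal stand-ins chosen for brevity, not the technique proposed in the article.

```python
def rep3_encode(bits):
    """Inner code: repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(bits):
    """Majority vote over each received triple."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def spc_encode(bits):
    """Outer code: append an even-parity bit to flag residual odd-weight errors."""
    return bits + [sum(bits) % 2]

def spc_check(word):
    return sum(word) % 2 == 0

msg = [1, 0, 1, 1]
tx = rep3_encode(spc_encode(msg))     # outer code first, then inner code
rx = tx[:]
rx[4] ^= 1                            # one channel bit error
decoded = rep3_decode(rx)             # inner decoder corrects the error
assert decoded[:-1] == msg and spc_check(decoded)
```

The division of labor is the point: the inner code cleans up most channel errors, and the outer code catches (or, with a stronger code, corrects) what slips through.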
International Journal of Engineering Research and Development is an international premier peer reviewed open access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
BCH Decoder Implemented On CMOS/Nano Device Digital Memories for Fault Tolera... (inventy)
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
An Efficient Fault Tolerance System Design for CMOS/Nanodevice Digital Memories (IJERA Editor)
Targeting future fault-prone hybrid CMOS/nanodevice digital memories, this paper presents two fault-tolerance design approaches that integrally address the tolerance of both defects and transient faults. The two approaches share several key features, including the use of a group of Bose-Chaudhuri-Hocquenghem (BCH) codes for both defect tolerance and transient-fault tolerance, and the integration of BCH code selection with dynamic logical-to-physical address mapping. A new BCH decoder model is proposed that reduces area and simplifies the computational scheduling of both the syndrome and Chien-search blocks without parallelism, leading to high throughput. The goal of fault-tolerant computing is to improve the dependability of systems, where dependability is defined as the ability of a system to deliver service at an acceptable level of confidence in either the presence or absence of faults. The results of simulation and implementation using the Xilinx ISE software and the LCD screen on the FPGA board are presented at the end.
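The syndrome stage at the heart of any BCH decoder can be illustrated with the (7,4) Hamming code, which is the simplest (t = 1) member of the BCH family; larger BCH codes replace the position lookup with Berlekamp-Massey and Chien search over GF(2^m), but the syndrome-then-locate structure is the same.

```python
# Parity-check matrix of the (7,4) Hamming code -- the t = 1 BCH code. Column i
# (1-indexed) is the binary representation of i, so a single-bit error produces
# a syndrome equal to the error position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """s = H . r^T over GF(2); an all-zero syndrome means no detectable error."""
    return [sum(h * r for h, r in zip(row, word)) % 2 for row in H]

def correct(word):
    """Single-error correction: the syndrome bits spell out the error position."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    word = word[:]
    if pos:
        word[pos - 1] ^= 1
    return word

codeword = [0, 1, 1, 0, 0, 1, 1]   # a valid (7,4) codeword
received = codeword[:]
received[4] ^= 1                   # flip bit 5 in transit
assert correct(received) == codeword
```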
Survey on Error Control Coding Techniques (IJTET Journal)
Abstract - Error control coding techniques are used to ensure that the information received is correct and has not been corrupted by environmental defects and noise occurring during transmission or during data read operations from memory. Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission; modern coding techniques allow such corruption to be detected and corrected. Error control coding is divided into automatic repeat request (ARQ) and forward error correction (FEC). In ARQ, when the receiver detects an error, it requests that the sender retransmit the data. FEC adds redundant data to a message so that it can be recovered by the receiver even when a number of errors are introduced, either during transmission or in storage. Detection and correction of burst errors can be obtained with Reed-Solomon codes, while low-density parity-check codes furnish outstanding performance remarkably close to the Shannon limit.
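The ARQ half of this taxonomy can be illustrated with a stop-and-wait loop. The single parity bit used for error detection here is a deliberately minimal check (it misses even numbers of bit errors); real links would use a CRC, and the toy channel below is an assumption for the demonstration.

```python
def parity(bits):
    return sum(bits) % 2

def send_with_arq(frame, channel, max_tries=5):
    """Stop-and-wait ARQ: append a check bit, retransmit until the check passes."""
    tx = frame + [parity(frame)]
    for attempt in range(1, max_tries + 1):
        rx = channel(tx)
        if parity(rx[:-1]) == rx[-1]:
            return rx[:-1], attempt       # check passed: accept the frame
    raise RuntimeError("retransmission limit reached")

calls = {"n": 0}
def channel(bits):
    """Toy channel: the first attempt suffers one bit error, the retry is clean."""
    calls["n"] += 1
    if calls["n"] == 1:
        bad = bits[:]
        bad[2] ^= 1
        return bad
    return bits[:]

data, tries = send_with_arq([1, 0, 1, 1, 0, 1, 0, 0], channel)
assert data == [1, 0, 1, 1, 0, 1, 0, 0] and tries == 2
```

FEC, by contrast, spends the redundancy up front so no return channel is needed, which is why the survey treats the two as complementary strategies.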
OPTICAL SWITCHING CONTROLLER USING FPGA AS A CONTROLLER FOR OCDMA ENCODER SYSTEM (Editor IJCATR)
This paper proposes the design of an optical switching controller using an FPGA for an OCDMA encoder system. The encoder is one of the new technologies used to transmit coded data in an optical communication system by means of an FPGA and optical switches. It provides high security for data transmission, since all data are transmitted in binary-coded form. The output signals from the FPGA are coded with a binary code and fed to an optical switch before the signal modulates the carrier and is transmitted to the receiver. In this paper, the data patterns AA and 55 were used for source 1 and source 2; sample data were generated, sent as packets to the FPGA, and stored in RAM. The simulation was carried out in Verilog targeting a Spartan-2 device, and the outputs are displayed as waveforms. The main functions of the FPGA controlling unit are producing single pulses and configuring the optical switching system.
Performance of a Concatenated LDPC-based STBC-OFDM System with MRC Receivers (IJECEIAES)
This paper presents the bit error rate performance of low-density parity-check (LDPC) coding concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a rate-3/4 convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, maximum ratio combining (MRC) channel equalization has been implemented to extract the transmitted symbols without enhancing the noise power.
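The MRC step can be sketched in isolation: each diversity branch is weighted by the conjugate of its channel gain before summing, which aligns the branch phases and weights stronger branches more heavily. The two complex branch gains and the noise level below are arbitrary assumptions for the demonstration.

```python
import cmath
import random

random.seed(0)
N = 500
symbols = [random.choice((-1.0, 1.0)) for _ in range(N)]   # BPSK stream

# Two diversity branches with assumed (and, here, perfectly known) complex gains.
h = [0.9 * cmath.exp(0.4j), 0.5 * cmath.exp(-1.1j)]

def awgn():
    return complex(random.gauss(0, 0.3), random.gauss(0, 0.3))

received = [[g * s + awgn() for s in symbols] for g in h]

# MRC: multiply each branch by the conjugate of its gain and sum. The combined
# statistic has SNR equal to the sum of the branch SNRs.
errors = 0
for k in range(N):
    combined = sum(g.conjugate() * branch[k] for g, branch in zip(h, received))
    decision = 1.0 if combined.real >= 0 else -1.0
    errors += decision != symbols[k]
print("symbol error rate:", errors / N)
```

This is why MRC "extracts symbols without enhancing noise power": unlike zero-forcing, it never divides by a weak channel gain.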
FPGA Implementation of LDPC Encoder for Terrestrial Television (AI Publications)
Increasing data rates in digital television networks raise the demands on the data capacity of the current transmission channels. Through new standards, the capacity of existing channels is increased with new methods of error-correction coding and modulation. In this work, low-density parity-check (LDPC) codes are implemented for their error-correcting capability. LDPC codes are linear error-correcting codes used for transmitting messages over noisy channels, and they are finding increasing use in applications requiring reliable and highly efficient information transfer. They are capable of near-Shannon-limit performance and have low decoding complexity. LDPC uses a parity-check matrix for both encoding and decoding; the parity-check matrix is what enables the detection and correction of errors on noisy channels. This work presents the design and implementation of an LDPC encoder for digital terrestrial television transmission according to the Chinese DTMB standard. The system is written in Verilog and implemented on an FPGA, and the whole design is verified against a MATLAB model.
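The encoding mechanics described here can be shown with a toy systematic code. The small dense matrix below is an illustration only; a real DTMB LDPC parity-check matrix is large and sparse, but parity bits are computed from check equations in the same way.

```python
# A tiny parity-check matrix in systematic form H = [P | I3]; the identity part
# corresponds to the parity bits, so encoding is a direct matrix-vector product.
P = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
]

def encode(msg):
    """Systematic encoding: parity_j = (row j of P) . msg over GF(2)."""
    parity = [sum(p * m for p, m in zip(row, msg)) % 2 for row in P]
    return msg + parity

def satisfies_checks(word):
    """A word is a codeword iff H . c^T = 0, i.e. every check equation holds."""
    k = len(word) - len(P)
    return all(
        (sum(row[i] * word[i] for i in range(k)) + word[k + j]) % 2 == 0
        for j, row in enumerate(P)
    )

cw = encode([1, 0, 1, 1])
assert satisfies_checks(cw)
assert not satisfies_checks([cw[0] ^ 1] + cw[1:])   # a single flip is detected
```

In hardware, each parity equation becomes an XOR tree, which is why a parity-check matrix with structured sparsity maps so naturally onto an FPGA encoder.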
Simulation of Turbo Convolutional Codes for Deep Space Mission (IJERA Editor)
In satellite communication, deep-space missions are the most challenging, since the system has to work at very low Eb/No, and concatenated codes are the ideal choice for such missions. This paper describes the simulation of turbo codes in SIMULINK. The performance of a turbo code depends on various factors; here we consider the impact of interleaver design. A detailed simulation is presented and the performance is compared across different interleaver designs.
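The role of the interleaver can be seen with a minimal block interleaver. This simple row/column design is a stand-in for exposition; turbo codes typically use pseudo-random or algebraic interleavers, which is exactly the design space the paper compares.

```python
def block_interleave(bits, rows, cols):
    """Write row by row into a rows x cols array, read out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: interleaving with the dimensions swapped."""
    return block_interleave(bits, cols, rows)

data = list(range(12))
tx = block_interleave(data, 3, 4)
# A burst hitting tx[3:6] lands on original positions 1, 5 and 9 -- the
# deinterleaver spreads adjacent channel errors across the codeword, so each
# constituent decoder sees them as isolated, correctable errors.
assert [tx[i] for i in range(3, 6)] == [1, 5, 9]
assert block_deinterleave(tx, 3, 4) == data
```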
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb... (TELKOMNIKA JOURNAL)
An iterative reliability level list (RLL) based soft-input soft-output (SISO) decoding algorithm is proposed for block turbo codes (BTCs). The algorithm adapts the RLL-based decoding algorithm, which is soft-input hard-output, for the constituent block codes. The extrinsic information is calculated from the reliability of these hard-output decisions and is passed as soft input to the iterative turbo decoding process. RLL-based decoding of the constituent codes estimates the optimal transmitted codeword through a directed minimal search. The proposed RLL-based constituent decoder replaces the Chase-2 constituent decoder of the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear performance advantage over the conventional Chase-2 based SISO decoding scheme, with reduced decoding latency at lower noise levels.
Hybrid photovoltaic-thermoelectric systems convert solar energy into electrical energy more effectively. Two energy sources are used: photovoltaic cells, which convert radiant light into electrical energy, and thermoelectric modules, which convert heat into electricity. Furthermore, a solar-thermoelectric hybrid system is environmentally friendly and has no harmful emissions, and it increases overall reliability without sacrificing the quality of the power generated. This paper presents an overview of previous research and technological advancement in solar-thermoelectric hybrid systems.
Effect of Chemical Reaction and Radiation Absorption on Unsteady Convective H... (IJMER)
Abstract: Most network charges are based on components' thermal limits, providing correct economic signals for reinforcing network transformers and lines. However, less attention is paid to the reinforcement cost driven by nodal voltage limits, particularly those resulting from contingencies. In this work, a new charging approach is proposed in which busbar power perturbation is linked to the busbar voltage degradation rate, which in turn is related to the incremental investment cost required to maintain voltage levels. The incremental cost is obtained by using the nodal voltage spare capacity to gauge the time to invest in a reactive power compensation device for a defined load growth rate. The time to invest takes into account the network nodal voltage profiles under N-1 circuit contingencies (one line outage at a time). Further, both nodal MW and MVAr perturbations are considered. This novel approach is demonstrated on the IEEE 14-bus network, illustrating the difference in charges when contingencies are considered, thereby providing correct forward-looking economic signals to potential network users. In turn, this will help them make informed decisions as to whether to invest in reactive power compensation assets or pay the network operators for reactive power provision. Most importantly, the new approach outperforms the currently used power factor (pf) penalty.
Index Terms: Base LRIC-voltage network charges, CF LRIC-voltage network charges, lower nodal voltage limit, upper nodal voltage limit, contingency factors, spare nodal voltage capacity, VAr compensation assets.
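One ingredient of such charging schemes, estimating the time until spare capacity is exhausted under compound load growth, can be sketched as follows. The formula and numbers are a generic illustration of the time-to-invest idea, not the paper's model.

```python
import math

def years_to_invest(headroom_factor, growth_rate):
    """Years until compound load growth at rate g consumes the available headroom:
    solve (1 + g)**t = headroom_factor for t."""
    return math.log(headroom_factor) / math.log(1.0 + growth_rate)

# e.g. a node loaded to 80% of its voltage-driven limit (headroom factor 1.25)
# with 2% annual load growth:
print(round(years_to_invest(1.25, 0.02), 1))  # -> 11.3
```

Discounting the future asset cost back over this horizon is what turns a physical headroom measurement into an incremental charge, and contingencies shrink the headroom, which is why they raise the charge.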
Stress Analysis of Precast Prestressed Concrete Beams during Lifting (IJMER)
The use of long-span prestressed beams in bridge construction is very common. Even when the sections are economical, erection of the beam still poses a construction challenge, and little work has been done on the analysis of stress and deflection at the erection stage. This paper deals with the behavior of precast prestressed beams during lifting: because the spans of these beams are large, they may fail due to cracking during erection. A detailed three-dimensional finite element analysis of two prestressed beam sections was carried out, incorporating the effects of initial imperfections and prestress. Results were obtained for both a prestressed and a non-prestressed beam and were compared with Moen's formulae. To include the effect of the prestressing cables, new additional formulae were introduced and used in combination with Moen's; the results obtained were approximately validated against the finite element analysis. It is seen that the prestressing cables have a significant effect on the behavior of a beam during lifting: for a prestressed beam the overhang length should be kept to a minimum for safe erection, which is the opposite of the case for a normal beam.
Multistage interconnection networks (MINs) are among the most efficient switching architectures in terms of the number of switching elements (SEs). In the omega network topology, switches are arranged in multiple stages; each SE has two input and two output ports and is interconnected to the neighboring stages in a shuffle-exchange pattern, so message routing in such a network is determined by the inter-stage connection pattern.
Optical multistage interconnection networks (OMINs) are the optical counterpart of MINs, and their main problem is crosstalk. The purpose of this paper is to present a crosstalk-free modified omega network based on a time-domain approach. The paper presents a source- and destination-based algorithm (SDBA), which schedules the source and destination addresses for message routing, and compares it with the crosstalk-free modified omega network (CFMON), which also minimizes crosstalk. The proposed network is a modified form of the omega network.
Integrating Environmental Accounting in Agro-Allied and Manufacturing Industr... (IJMER)
‘ONLY WHEN THE LAST TREE IS CUT, ONLY WHEN THE LAST RIVER IS
POLLUTED, ONLY WHEN THE LAST FISH IS CAUGHT, ONLY THEN WILL THEY REALIZE
THAT YOU CANNOT EAT MONEY’ American proverb
Due to growing awareness of and concern about the impact of human activity on the ecosystem, there is an increasing trend to judge organizations in relation to the communities in which they operate. The impact of their activities on the environment, with regard to pollution of water, air and land and the abuse of natural resources, is coming under the scrutiny of governments, stakeholders and citizens. Education is considered the key to effective development strategies, and TVET institutions must therefore be the master key that can alleviate poverty, promote peace, conserve the environment, improve the quality of life for all and help achieve sustainable development. Unless proper accounting work is done, it cannot be determined whether either has been fulfilling its responsibilities. The aim of the study was to explore whether distinctive processes of environmental accounting are possible in agro-allied and manufacturing industries with a view to enhancing sustainability. To accomplish this aim, the research explores environmental accountability practices in TVET institutions. This paper is part of an exploratory research project and is limited in that it attempts to be illuminative and theoretically driven. The paper aims to show that environmental reporting and disclosure will enable agro-allied and manufacturing industries to undertake a major transformation that includes approaches harmonizing economic prosperity, environmental conservation and social well-being. While strategies for achieving this goal are not yet widespread, a range of international experiences is beginning to suggest ways forward; these initiatives include national TVET policy reforms, green campuses, green curricula, green communities, green research and green culture. The paper includes suggested templates that can be useful in agro-allied and manufacturing industries.
Repairing of Concrete by Using Polymer-Mortar CompositesIJMER
Replacement of concrete buildings, bridges, roadways and other structures is becoming more and more expensive as the costs of materials and labor continue their upward spiral. Polymer-modified or polymer cement mortar (PCM) and concrete (PCC) are a category of concrete-polymer composites made from cement mortar or concrete combined with polymers, and the main application of polymer cements is in concrete repair. In this research two sets of mixtures consisting of mortar and polymer were prepared to fabricate the polymer-cement composite. The first set included mortar with a 1:1 cement-sand ratio without water, while the other set included mortar with a 1:2 cement-sand ratio without water. The polymer was Quickmast 105 epoxy, which was added to the mortar after mixing the resin with the hardener in a 1:3 proportion. Each set consisted of different polymer-mortar percentages (50:50, 40:60 and 30:70). Compression, flexural and bonding strength tests were conducted. The highest compressive strength was about 102.889 MPa and the highest flexural strength was about 57.648 MPa for the 1:1 mix, while the polymer-mortar with the 40:60 ratio showed a higher bonding compressive strength. The proportions between cement and sand, and between polymer and mortar, play a major role in adhesion and strength and are key factors in bonding and suitability for repairs.
A Novel Computing Paradigm for Data Protection in Cloud ComputingIJMER
Prospective Evaluation of Intra operative Nucleus 22-channel cochlear implant...IJMER
Stability of Simply Supported Square Plate with Concentric CutoutIJMER
The finite element method is used to obtain the elastic buckling loads of a simply supported isotropic square plate containing circular, square and rectangular cutouts; the ANSYS finite element software was used in the study. The applied in-plane loads considered are uniaxial and biaxial compression, and in all cases the load is distributed uniformly along the plate's outer edges. The analysis considers the effects of the size and shape of concentric cutouts, for different plate thickness ratios and an all-round simply supported boundary condition, on the plate buckling strength. It is found that cutouts have considerable influence on the buckling load factor k, and the effect is larger for cutout ratios greater than 0.3 and thickness ratios greater than 0.15.
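For reference, the buckling load factor k enters the classical thin-plate formula for the critical buckling stress. A minimal sketch, with illustrative material values rather than the paper's FE results (k = 4 is the textbook value for a simply supported square plate without a cutout under uniaxial compression):

```python
import math

def critical_buckling_stress(k, E, nu, t, b):
    """Classical elastic buckling stress of a thin plate (Pa):
        sigma_cr = k * pi^2 * E / (12 * (1 - nu^2)) * (t / b)^2
    k  : buckling load factor (geometry/loading dependent)
    E  : Young's modulus (Pa), nu : Poisson's ratio
    t  : plate thickness (m),  b : plate width (m)"""
    return k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t / b) ** 2

# Illustrative example: steel plate, b = 0.5 m, t = 5 mm, no cutout (k = 4)
sigma_cr = critical_buckling_stress(4.0, 200e9, 0.3, 0.005, 0.5)
```

The FE study effectively reports how k (and hence sigma_cr) degrades as the cutout ratio and thickness ratio grow.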
Experimental Investigation of Performance & Emission Characteristics of Diese...IJMER
Finite Element Analysis of Obround Pressure VesselsIJMER
An Efficient System for Traffic Control in Networks Using Virtual Routing Top...IJMER
Damping Of Composite Material Structures with Riveted JointsIJMER
Vibration and noise reduction are crucial in maintaining a high performance level and prolonging the useful life of machinery, automobiles, and aerodynamic and spacecraft structures. Damping in materials occurs due to energy release through micro-slips along frictional interfaces and due to varying strain regions and interaction between the metals. However, the damping effect in metals is so small that it can often be neglected. Composites have better damping properties than structural metals, and their damping cannot be neglected; typically, the range of composite damping begins where the best damped metal stops. In the present work, theoretical analysis was performed on polymer matrix composites (glass fibre polyesters) with riveted joints by varying the initial conditions. Strain energy loss was calculated to quantify the damping in the composites. Using an FEA model, the load variation with respect to time was observed, and the calculated strain energy loss was used to find the material damping for carbon fibre epoxy with riveted joints. Various simulations were performed in ANSYS, and the results were used to calculate the loss factor, Rayleigh's damping constants and the logarithmic decrement.
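The Rayleigh damping constants and the loss factor mentioned above follow standard relations; a minimal sketch, with illustrative frequencies and damping ratios rather than the measured values from this study:

```python
import math

def rayleigh_constants(w1, z1, w2, z2):
    """Solve zeta_i = alpha/(2*w_i) + beta*w_i/2 for the Rayleigh constants
    alpha (mass-proportional) and beta (stiffness-proportional), given target
    damping ratios z1, z2 at circular frequencies w1, w2 (rad/s)."""
    den = w2 ** 2 - w1 ** 2
    alpha = 2.0 * w1 * w2 * (z1 * w2 - z2 * w1) / den
    beta = 2.0 * (z2 * w2 - z1 * w1) / den
    return alpha, beta

def loss_factor_from_decay(x0, xn, n_cycles):
    """Loss factor from the logarithmic decrement of a free-decay trace:
    delta = (1/n) * ln(x0/xn), and eta ~ delta/pi for light damping."""
    delta = math.log(x0 / xn) / n_cycles
    return delta / math.pi
```

Feeding alpha and beta back into zeta(w) = alpha/(2w) + beta*w/2 reproduces the two target damping ratios exactly, which is how the constants are typically checked before use in an ANSYS transient run.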
Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit. The motivation for using turbo codes is that they are an appealing mix of a random appearance on the channel and a physically realizable decoding structure. Communication systems face problems of latency, fast switching, and reliable data transfer. The objective of this paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver (permuter). The data received from the channel is decoded iteratively using the two constituent decoders: in each cycle, soft (probabilistic) information about each bit of the decoded sequence is passed from one elementary decoder to the other and updated. The performance of the chip is also verified using the maximum a posteriori (MAP) method in the decoder chip. The field-programmable gate array (FPGA) hardware is evaluated using hardware and timing parameters extracted from Xilinx ISE 14.7. Parallel concatenation offers a better overall rate for the same component-code performance, along with reduced delay, low hardware complexity, and higher frequency support.
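The parallel concatenation described above can be sketched in software. Below is a toy rate-1/3 turbo encoder built from two recursive systematic convolutional (RSC) encoders with assumed generators (1, 5/7 octal), a common textbook choice rather than the generators of the chip in the paper:

```python
import random

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder with
    generators (1, 5/7) in octal: feedback 1 + D + D^2, feedforward 1 + D^2.
    Returns the parity sequence; the systematic part is `bits` itself."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2        # feedback bit (1 + D + D^2)
        parity.append(fb ^ s2)  # parity tap (1 + D^2)
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, perm):
    """Parallel concatenation: encoder 1 sees the bits in natural order,
    encoder 2 sees them permuted by the interleaver `perm`.
    Output (systematic, parity1, parity2) forms a rate-1/3 turbo code."""
    interleaved = [bits[i] for i in perm]
    return bits, rsc_encode(bits), rsc_encode(interleaved)

# Example: a length-8 block with a random interleaver
msg = [1, 0, 1, 1, 0, 0, 1, 0]
perm = list(range(len(msg)))
random.shuffle(perm)
systematic, parity1, parity2 = turbo_encode(msg, perm)
```

An iterative decoder would run two soft decoders matched to these constituent encoders and pass extrinsic log-likelihood ratios between them through the same interleaver, which is the exchange the abstract describes.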
A NOVEL APPROACH FOR LOWER POWER DESIGN IN TURBO CODING SYSTEMVLSICS Design
Low power is an extremely important issue for future mobile communication systems; the focus of this paper is the implementation of turbo codes for low-power solutions. The effect on performance of variation in parameters such as frame length, number of iterations, type of encoding scheme and type of interleaver, in the presence of additive white Gaussian noise, is studied with a floating-point model. To capture the effect of quantization and word-length variation, a fixed-point model of the application is also developed. The application performance measure, the bit-error rate (BER), is used as a design constraint while optimizing for power and area coverage. Low-power optimization is performed at the implementation level through voltage scaling. With these techniques, power is reduced by 98.5%, area (LUTs) by 57%, and the speed grade is increased. A power manager of this type is proposed and implemented based on the timing details of the turbo decoder in the VHDL model.
Reed Solomon Coding For Error Detection and Correctioninventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Space time block coding is a technique used in wireless communication to transmit multiple copies of a data stream across a number of antennas and to exploit the various received versions of the data to improve the reliability of data transfer. The fact that the transmitted signal must traverse a potentially difficult environment with scattering, reflection, refraction and so on and may then be further corrupted by thermal noise in the receiver means that some of the received copies of the data may be closer to the original signal than others. This redundancy results in a higher chance of being able to use one or more of the received copies to correctly decode the received signal. In fact, space–time coding combines all the copies of the received signal in an optimal way to extract as much information from each of them as possible.
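As a concrete instance of such optimal combining, the two-antenna Alamouti scheme can be sketched as follows. This is an illustrative, noiseless sketch of the classical scheme, not code tied to any particular system in this collection:

```python
def alamouti_transmit(s1, s2):
    """Symbols sent by (antenna 1, antenna 2) over two consecutive slots:
    slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1))."""
    return (s1, s2), (-s2.conjugate(), s1.conjugate())

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna with channel gains
    h1, h2 held constant over both slots:
        r1 = h1*s1 + h2*s2 (+ noise)
        r2 = -h1*conj(s2) + h2*conj(s1) (+ noise)
    Returns symbol estimates scaled by |h1|^2 + |h2|^2."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

Expanding the combiner algebra shows each estimate collapses to (|h1|^2 + |h2|^2) times the transmitted symbol, so every received copy contributes its full channel energy, which is the "optimal combining" the paragraph refers to.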
Coverage of WCDMA Network Using Different Modulation Techniques with Soft and...ijcnac
Wideband code division multiple access (WCDMA) based 3G cellular mobile wireless networks are expected to provide a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). Serving the diverse QoS requirements of these networks necessitates new radio resource management strategies for effective utilization of network resources together with coding schemes. In this paper, the coverage area for voice traffic is discussed for different modulation techniques, coding schemes and decision decoders, with the aim of improving coverage in the mobile communication system. The paper focuses mainly on the coverage area of a WCDMA system using link budget calculations with different modulation and coding schemes and decision decoders. Simulation results demonstrate coverage extension for voice service with different modulation and coding schemes and soft- and hard-decision decoders at an appropriate bit error rate (BER) to maintain voice QoS.
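A link budget calculation of the kind referred to above amounts to summing gains and subtracting losses and margins to obtain a maximum allowed path loss, then inverting a propagation model for a cell radius. A sketch with illustrative numbers; the COST-231 Hata urban model and all parameter values here are assumptions, not figures from the paper:

```python
import math

def max_path_loss_db(tx_power_dbm, tx_gain_db, rx_sensitivity_dbm,
                     rx_gain_db, margins_db):
    """Link budget: maximum allowed path loss (dB) between transmitter
    and receiver, with all margins lumped into one figure."""
    return tx_power_dbm + tx_gain_db + rx_gain_db - rx_sensitivity_dbm - margins_db

def cell_radius_km(path_loss_db, f_mhz=1950.0, hb=30.0, hm=1.5):
    """Invert the COST-231 Hata urban model (C = 3 dB) to a radius in km.
    f_mhz: carrier frequency, hb/hm: base/mobile antenna heights (m)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm - (1.56 * math.log10(f_mhz) - 0.8)
    const = 46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(hb) - a_hm + 3.0
    slope = 44.9 - 6.55 * math.log10(hb)
    return 10 ** ((path_loss_db - const) / slope)

# Illustrative uplink: 21 dBm terminal, 18 dBi base antenna,
# -121 dBm sensitivity, 12 dB of combined margins
mapl = max_path_loss_db(21.0, 0.0, -121.0, 18.0, 12.0)
radius = cell_radius_km(mapl)
```

Lower-order modulation and stronger coding improve the receiver sensitivity figure at a given BER, which raises the maximum path loss and thus extends the radius; that is the mechanism behind the coverage comparison in the paper.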
Chaos Encryption and Coding for Image Transmission over Noisy Channelsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
New Structure of Channel Coding: Serial Concatenation of Polar Codesijwmn
In this paper, we introduce a new coding and decoding structure for enhancing the reliability and
performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar
codes in series to create robust error-correcting codes. The primary objective here is to optimize the
behavior of individual elementary codes within polar codes. In this structure, we incorporate interleaving,
a technique that rearranges bits to maximize the separation between originally neighboring symbols. This
rearrangement is instrumental in converting error clusters into distributed errors across the entire
sequence. To evaluate their performance, we model a communication system with seven
components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel
decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of codes for different
block lengths and code rates. Next, we compare the bit error rate (BER) performance between our
proposed method and polar codes.
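The interleaving step can be illustrated with a simple row/column block interleaver. This is a generic sketch; the paper does not specify this particular interleaver:

```python
def block_interleave(bits, rows, cols):
    """Write the sequence row by row into a rows x cols array and read it
    column by column. Consecutive channel positions then map to original
    positions `cols` apart, so a burst of channel errors is dispersed
    across the sequence."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse mapping: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    for i, b in enumerate(bits):
        c, r = divmod(i, rows)
        out[r * cols + c] = b
    return out
```

After deinterleaving, a burst that hit consecutive channel positions lands on positions spaced `cols` apart, converting an error cluster into distributed errors that the component decoders handle far better.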
A Study on Translucent Concrete Product and Its Properties by Using Optical F...IJMER
Translucent concrete is a concrete-based material with light-transmitting properties, obtained by embedding optical elements such as optical fibers in the concrete. Light is conducted through the concrete from one end to the other, resulting in a certain light pattern on the opposite surface depending on the fiber structure. Optical fibers transmit light so effectively that there is virtually no loss of light conducted through them. This paper deals with the modeling of such translucent or transparent concrete blocks and panels, their usage, and the advantages they bring to the field. The main purpose is to use sunlight as a light source to reduce the power consumption of illumination, to use the optical fibers to sense the stress of structures, and to use the concrete for architectural purposes in buildings.
Developing Cost Effective Automation for Cotton Seed DelintingIJMER
A low-cost automation system for the removal of lint from cottonseed is designed and developed. The setup consists of a stainless steel drum with a stirrer in which linted cottonseed is mixed with concentrated sulphuric acid so that the lint is burnt off. The lint-free cottonseed is then treated with lime water to neutralize the acid and, after water washing, the cottonseed is used for agricultural purposes.
Study & Testing Of Bio-Composite Material Based On Munja FibreIJMER
The incorporation of natural fibres such as munja fibre into composites has gained increasing application in many areas of engineering and technology. The aim of this study is to evaluate mechanical properties, such as the flexural and tensile properties, of reinforced epoxy composites. Interest is mainly due to their practical benefits: they are lightweight and low cost compared to synthetic fibre composites. Munja fibres have recently become a substitute material in many weight-critical applications in areas such as aerospace, automotive and other demanding industrial sectors. In this study, natural munja fibre composites and munja/fibreglass hybrid composites were fabricated by a combination of hand lay-up and cold-press methods. The present work considers a new variety of munja fibre; the main aim is to extract the neat fibre and characterize its flexural behaviour. The composites are fabricated by reinforcing untreated and treated fibre and are tested for their mechanical properties strictly as per ASTM procedures.
Hybrid Engine (Stirling Engine + IC Engine + Electric Motor)IJMER
A hybrid engine is a combination of a Stirling engine, an IC engine and an electric motor, all three connected to a single shaft. The power source of the Stirling engine will be a solar panel. The aim is to run the automobile using this hybrid engine.
Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...IJMER
Present-day technology demands eco-friendly developments. In this era composite materials are playing a vital role in different fields of engineering and are being utilized as principal materials and important components across the engineering field. While the importance of composite applications is well known, the use of natural fibres for reinforcement has been given priority for some time. Changing from synthetic fibres to natural fibres, however, provides only half-green composites; a fully green composite is achieved only if the matrix component is also eco-friendly. Keeping this in view, a detailed literature survey has been carried out through various issues of the journals related to this field. The material system used is sunnhemp fibre, with some epoxy and hardener added for stability and drying of the bio-composites. Various graphs and bar charts are superimposed on each other for comparison, with graphs plotted in MATLAB and ORIGIN 6.0 software. Tensile strengths were determined, and various properties of the different bio-composites have been compared among themselves; the behaviour of the bio-composites of this work has also been compared with other works. The bio-composites developed in this work are likely to find applications in false ceilings, partitions, biodegradable packaging, automotive interiors, sports goods (e.g. rackets, nets) and toys.
Geochemistry and Genesis of Kammatturu Iron Ores of Devagiri Formation, Sandu...IJMER
The greenstone belts of Karnataka in the Dharwar craton are enriched in BIFs, where iron formations are confined to the basin shelf, clearly separated from the deeper-water iron formation that accumulated at the basin margin flanking the marine basin. Geochemical data procured in terms of major elements, trace elements and REE are plotted in various diagrams to interpret the genesis of the BIFs. Al2O3, Fe2O3 (T), TiO2, CaO and SiO2 abundances and ratios show a wide variation. Ni, Co, Zr, Sc, V, Rb, Sr, U, Th, ΣREE, La, Ce and Eu anomalies and their binary relationships indicate that wherever the terrigenous component has increased, the concentration of felsic elements such as Zr and Hf has gone up. Elevated concentrations of Ni, Co and Sc are contributed by chlorite and other components characteristic of basic volcanic debris. The data suggest that these formations were generated by chemical and clastic sedimentary processes on a shallow shelf: during transgression, chemical precipitation took place at the sediment-water interface, whereas at the time of regression, iron ore formed with sedimentary structures and textures in the Kammatturu area, in a setting where the water column was oxygenated.
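The Eu anomaly mentioned above is conventionally quantified on chondrite-normalized values as Eu/Eu* = Eu_N / sqrt(Sm_N x Gd_N). A sketch; the chondrite normalizing concentrations below are illustrative round figures, not the normalization actually used in the paper:

```python
import math

# Illustrative chondrite normalizing concentrations (ppm); placeholders only
CHONDRITE_PPM = {"Sm": 0.15, "Eu": 0.056, "Gd": 0.20}

def eu_anomaly(sm_ppm, eu_ppm, gd_ppm):
    """Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N) on chondrite-normalized values.
    Values > 1 indicate a positive Eu anomaly, < 1 a negative one."""
    sm_n = sm_ppm / CHONDRITE_PPM["Sm"]
    eu_n = eu_ppm / CHONDRITE_PPM["Eu"]
    gd_n = gd_ppm / CHONDRITE_PPM["Gd"]
    return eu_n / math.sqrt(sm_n * gd_n)
```

Ce anomalies are computed the same way against the La and Pr neighbours; both ratios are standard discriminants between hydrothermal and seawater signatures in BIF studies.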
Experimental Investigation on Characteristic Study of the Carbon Steel C45 in...IJMER
In this paper, the mechanical characteristics of C45 medium carbon steel are investigated under various working conditions. The main characteristic studied is the impact toughness of the material in different configurations; the experiments were carried out on Charpy impact testing equipment. The study reveals the ability of the material to absorb energy up to failure for various specimen configurations under different heat-treated conditions, and the corresponding results are compared with the analysis outcome.
Non linear analysis of Robot Gun Support Structure using Equivalent Dynamic A...IJMER
Robot guns are being increasingly employed in automotive manufacturing to replace risky jobs and to increase productivity. Using a single robot for a single operation proves to be expensive; hence, for cost optimization, multiple guns are mounted on a single robot and multiple operations are performed. A robot gun structure is an efficient way to make multiple welds simultaneously. However, mounting several weld guns on a single structure induces a variety of dynamic loads, especially during movement of the robot arm as it maneuvers to reach the weld locations. The primary idea employed in this paper is to model those dynamic loads as equivalent G-force loads in FEA. This approach is on the conservative side, saves time, and is consequently cost efficient. The paper works towards creating a standard operating procedure for the analysis of such structures, with emphasis on deploying technical aspects of FEA such as nonlinear geometry, the multipoint-constraint contact algorithm, and multizone meshing.
Static Analysis of Go-Kart Chassis by Analytical and Solid Works SimulationIJMER
This paper covers the modelling, simulation and static analysis of a go-kart chassis consisting of circular beams. Modelling, simulation and analysis are performed using the 3-D modelling software SolidWorks and ANSYS, according to the rulebook provided by the Indian Society of New Era Engineers (ISNEE) for the National Go Kart Championship (NGKC-14). The maximum deflection is determined by performing static analysis. Computed results are then compared with analytical calculations, where it is found that the location of maximum deflection agrees well with the theoretical approximation but varies in magnitude.
In recent years various vehicles have been introduced in the market, but limits on carbon emissions and BS-series norms restrict the speed of the vehicles available, and fuel vehicles cause environmental pollution; in the coming years there is a need to decrease dependency on fuel vehicles. The bicycle can be modified as an option for the future by implementing a new technique using a change in the pedal assembly and a variable-speed gearbox, such as a planetary gear set, to optimise vehicle speed with variable speed ratios. To increase the efficiency of the bicycle for a comfortable ride and to reduce the torque applied to it, we introduce an epicyclic gearbox in which transmission to the rear wheel is done through a chain drive (i.e. sprocket) with the help of the epicyclic gearbox, giving a number of different speeds during riding and reducing the torque required through the changed pedal mechanism.
Integration of Struts & Spring & Hibernate for Enterprise ApplicationsIJMER
This paper presents the Spring Framework, which is widely used in developing enterprise applications. In contrast to the current state where applications are developed using the EJB model, the Spring Framework asserts that ordinary Java beans (POJOs) can be utilized with minimal modifications. This modular framework can be used to develop applications faster and to reduce complexity. The paper highlights the design overview of the Spring Framework along with the features that have made it useful. The integration of multiple frameworks for an e-commerce system is also addressed, and a structure is proposed for a website based on the integration of the Spring, Hibernate and Struts frameworks.
Microcontroller Based Automatic Sprinkler Irrigation SystemIJMER
The microcontroller-based automatic sprinkler system is a new concept that applies the intelligence of embedded technology to sprinkler irrigation. The designed system replaces the conventional manual work involved in sprinkler irrigation with an automatic process. Using this system, a farmer is protected against adverse weather conditions, the tedious work of changing over sprinkler water pipelines, and the risk of accident due to high pressure in the water pipeline; overall, sprinkler irrigation is transformed into comfortable automatic work. The system provides flexibility and accuracy with respect to the time set for the operation of the sprinkler water pipelines. In the present work the author has designed and developed an automatic sprinkler irrigation system controlled and monitored by a microcontroller interfaced with solenoid valves.
On some locally closed sets and spaces in Ideal Topological SpacesIJMER
In this paper we introduce and characterize some new generalized locally closed sets, known as δ̂s-locally closed sets, and spaces known as δ̂s-normal spaces and δ̂s-connected spaces, and discuss some of their properties.
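For context, the classical notion being generalized can be stated as follows; this is the standard definition, with the paper's δ̂s-variants obtained by replacing open/closed sets with δ̂s-open/δ̂s-closed sets:

```latex
A \subseteq X \text{ is locally closed} \iff
A = U \cap F \text{ for some open } U \text{ and closed } F
\iff A \text{ is open in its closure } \overline{A}.
```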
Intrusion Detection and Forensics based on decision tree and Association rule...IJMER
This paper presents an approach based on the combination of two techniques, decision trees and association rule mining, for probe attack detection. This approach proves to be better than the traditional approach of generating rules for a fuzzy expert system by clustering methods: association rule mining selects the best attributes together, and the decision tree identifies the best parameters for creating the rules of the fuzzy expert system. Rules for the fuzzy expert system are then generated using association rule mining and decision trees. A decision tree is generated for the dataset to find the basic parameters for creating the membership functions of the fuzzy inference system, and membership functions are generated for the probe attack. Based on these rules, the fuzzy inference system is created and used as an input to a neuro-fuzzy system: it is loaded into the neuro-fuzzy toolbox, and the final ANFIS structure is generated as the outcome of the neuro-fuzzy approach. The experiments and evaluations of the proposed method were carried out on the NSL-KDD intrusion detection dataset. The experimental results show that the proposed combination of decision trees and association rule mining efficiently detects probe attacks and gives better results for detecting intrusions compared to other existing methods.
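The support/confidence machinery behind association rule mining can be sketched in a few lines. This is a brute-force illustration of the concept, not the mining algorithm or dataset pipeline used in the paper:

```python
from itertools import combinations

def association_rules(transactions, min_support, min_confidence):
    """Mine single-consequent rules A -> c from a list of transactions
    (each a set of items), keeping rules whose itemset support and rule
    confidence clear the given thresholds."""
    n = len(transactions)

    def support(itemset):
        s = set(itemset)
        return sum(1 for t in transactions if s <= t) / n

    items = sorted({i for t in transactions for i in t})
    rules = []
    for size in range(2, len(items) + 1):
        for itemset in combinations(items, size):
            sup = support(itemset)
            if sup < min_support:
                continue
            for consequent in itemset:
                antecedent = tuple(i for i in itemset if i != consequent)
                conf = sup / support(antecedent)
                if conf >= min_confidence:
                    rules.append((antecedent, consequent, sup, conf))
    return rules
```

In the paper's setting the "items" would be discretized attribute-value pairs of NSL-KDD records, and the surviving rules seed the fuzzy expert system's rule base.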
Natural Language Ambiguity and its Effect on Machine LearningIJMER
"Natural language processing" here refers to the use and ability of systems to process
sentences in a natural language such as English, rather than in a specialized artificial computer
language such as C++. The systems of real interest here are digital computers of the type we think of as
personal computers and mainframes. Of course humans can process natural languages, but for us the
question is whether digital computers can or ever will process natural languages. We have tried to
explore in depth and break down the types of ambiguity persistent throughout natural languages
and to answer the question "How does it affect the machine translation process, and thereby
machine learning as a whole?"
In today's software industry there is no perfect framework available for analysis and software development. An enormous number of software development processes currently exist that can be implemented to stabilize the process of developing a software system, but no perfect system has yet been recognized that can help software developers choose the best software development process. This paper presents the framework of a skillful system combined with a Likert scale. With the help of the Likert scale we define a rule-based model, delegate a mass score to every process, and develop a tool named MuxSet that helps software developers select an appropriate development process, which may enhance the probability of system success.
Material Parameter and Effect of Thermal Load on Functionally Graded CylindersIJMER
The present study investigates creep in thick-walled composite cylinders made of an aluminum/aluminum alloy matrix reinforced with silicon carbide particles. The distribution of SiCp is assumed to be either uniform or decreasing linearly from the inner to the outer radius of the cylinder. The creep behavior of the cylinder is described by a threshold-stress-based creep law with a stress exponent of 5. The composite cylinders are subjected to internal pressure applied gradually, and a steady-state condition of stress is assumed. The creep parameters required in the creep law are extracted by conducting regression analysis on the available experimental results. Mathematical models have been developed to describe steady-state creep in the composite cylinder using the von Mises criterion. The basic equilibrium equation of the cylinder and the other constitutive equations have been solved to obtain the creep stresses in the cylinder, and the effects of varying particle size, particle content and temperature on the stresses have been analyzed. The study revealed that the stress distributions in the cylinder do not vary significantly for various combinations of particle size, particle content and operating temperature, except for a slight variation observed with varying particle content. Functionally graded materials (FGMs) emerged from such work and have led to the development of superior heat-resistant materials.
Energy audit is a systematic process for finding energy conservation opportunities in industrial processes. The project carried out studies on various energy conservation measures in areas such as lighting, motors, compressors, transformers, and ventilation systems. This investigation studied the technical aspects of the various measures along with their cost-benefit analysis.
The investigation found that the major areas of energy conservation are:
1. Energy efficient lighting schemes.
2. Use of electronic ballast instead of copper ballast.
3. Use of wind ventilators for ventilation.
4. Use of VFD for compressor.
5. Transparent roofing sheets to reduce energy consumption.
An energy audit is thus a thorough, analytical way of achieving industrial energy conservation.
An Implementation of I2C Slave Interface using Verilog HDL (IJMER)
The focus of this paper is the implementation of a slave module following the Inter-Integrated Circuit (I2C) protocol with no data loss. The principle and operation of the I2C bus protocol are introduced. The design follows the I2C specification to provide device addressing, read/write operation and acknowledgement. The programmable nature of the device gives users the flexibility of configuring the I2C slave to any legal slave address, avoiding slave-address collisions on an I2C bus with multiple slave devices. This paper demonstrates how the I2C master controller transmits data to and receives data from the slave with proper synchronization.
The module is designed in Verilog and simulated in ModelSim; the design is also synthesized in Xilinx XST 14.1. This module acts as a slave for the microprocessor and can be customized for no data loss.
Discrete Model of Two Predators competing for One Prey (IJMER)
This paper investigates the dynamical behavior of a discrete model of a one-prey, two-predator system. The equilibrium points and their stability are analyzed. Time series plots are obtained for different sets of parameter values, and bifurcation diagrams are plotted to show the dynamical behavior of the system over a selected range of the growth parameter.
The Reliability in Decoding of Turbo Codes for Wireless Communications
International Journal of Modern Engineering Research (IJMER)
www.ijmer.com Vol. 3, Issue. 4, Jul - Aug. 2013 pp-2226-2231 ISSN: 2249-6645
www.ijmer.com 2226 | Page
T. Krishna Kanth¹, D. Rajendra Prasad²
¹M. Tech Student, Dept. of ECE, St. Ann's College of Engineering and Technology, Chirala, AP, India
²Assoc. Professor, Dept. of ECE, St. Ann's College of Engineering and Technology, Chirala, AP, India
ABSTRACT: Turbo codes are among the most powerful error control codes and high-performance forward error correction codes currently available, and serve as powerful building blocks in the search for more bandwidth-efficient coding schemes. Turbo codes emerged in 1993 and have since become a popular area of communications research. This paper describes three turbo decoding algorithms: the soft-output Viterbi algorithm (SOVA), the logarithmic maximum a posteriori (Log-MAP) algorithm, and the maximum logarithmic maximum a posteriori (Max-Log-MAP) algorithm. A soft-input soft-output (SISO) turbo decoder is built on SOVA and on the logarithmic versions of the MAP algorithm, namely the Log-MAP decoding algorithm. The bit error rate (BER) performances of these algorithms are compared.
KEYWORDS: Turbo codes, Channel coding, Iterative decoding.
I. INTRODUCTION
In information theory and coding theory with applications in computer science and telecommunication, error
detection and correction or error control are techniques that enable reliable delivery of digital data over unreliable
communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced
during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error
correction enables reconstruction of the original data. The near-Shannon-limit error correction performance of turbo codes and parallel concatenated convolutional codes has raised a lot of interest in the research community in finding practical decoding algorithms for implementing these codes. The demand for turbo codes in wireless communication systems has been increasing since they were first introduced by Berrou et al. in the early 1990s. Various systems such as 3GPP, HSDPA and WiMAX have already adopted turbo codes in their standards due to their large coding gain. It has also been shown that turbo codes can be applied to other wireless communication systems used for satellite and deep-space applications.
The MAP decoding algorithm, also known as the BCJR algorithm, is not practical for implementation in real systems: it is computationally complex, sensitive to SNR mismatch and to inaccurate estimation of the noise variance, and not practical to implement in a chip. The logarithmic version of the MAP algorithm and the Soft Output Viterbi Algorithm (SOVA) are the practical decoding algorithms for implementation in this system.
II. SHANNON–HARTLEY THEOREM
The field of information theory, of which error control coding is a part, is founded upon a 1948 paper by Claude Shannon. Shannon calculated a theoretical maximum rate at which data could be transmitted over a channel perturbed by additive white Gaussian noise (AWGN) with an arbitrarily low bit error rate. This maximum data rate, the capacity of the channel, was shown to be a function of the average received signal power S, the average noise power N, and the bandwidth W of the system. This function, known as the Shannon-Hartley capacity theorem, can be stated as:
C = W log2(1 + S/N) bits/s
If W is in Hz, then the capacity C is in bits/s. Shannon stated that it is theoretically possible to transmit data over such a channel at any rate R ≤ C with an arbitrarily small error probability.
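The Shannon-Hartley formula above is easy to evaluate directly; the bandwidth and SNR figures in this sketch are arbitrary illustrations, not values from the paper:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity C = W * log2(1 + S/N) in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative (invented) numbers: a 3.1 kHz channel at 30 dB SNR.
snr_linear = 10 ** (30 / 10)                    # 30 dB -> linear ratio 1000
capacity = shannon_capacity(3100, snr_linear)   # about 30.9 kbit/s
```

Any code rate R at or below this capacity can, in principle, be transmitted with arbitrarily small error probability.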
III. CODING IN WIRELESS COMMUNICATIONS
Coding theory is the study of the properties of codes and their fitness for a specific application and used for data
compression, error correction and more recently also for network coding. Codes are studied by various scientific disciplines,
such as information theory, electrical engineering, mathematics, and computer science for the purpose of designing efficient
and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection)
of errors in the transmitted data. Most digital communication techniques rely on error correcting coding to achieve an
acceptable performance under poor carrier-to-noise conditions. Coding in wireless communications is basically of two types:
III.1. Source coding: In computer science and information theory, 'data compression', 'source coding', or 'bit-rate reduction' involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. Lossy compression reduces bits by identifying unnecessary information and removing it. The process of reducing the size of a data file is popularly referred to as data compression, although its formal name is source coding (coding done at the source of the data before it is stored or transmitted).
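The lossless round trip described above can be observed with any general-purpose compressor; Python's zlib is used here purely as an illustration of removing statistical redundancy:

```python
import zlib

# A highly redundant byte string compresses well, and decompression
# restores it exactly: lossless source coding loses no information.
original = b"ab" * 200                 # 400 bytes of repeated pattern
packed = zlib.compress(original)

assert zlib.decompress(packed) == original   # exact round trip
assert len(packed) < len(original)           # redundancy removed
```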
III.2. Channel coding: Channel coding is also called forward error correction (FEC). The purpose of channel coding is to find codes which transmit quickly, contain many valid code words, and can correct or at least detect many errors. Channel coding refers to the processes done in both the transmitter and receiver of a digital communications system. While not mutually exclusive, performance in these areas involves a trade-off, so different codes are optimal for different applications. The needed properties of a code mainly depend on the probability of errors occurring during transmission. Channel coding is distinguished from source coding, i.e., the digitizing of analog message signals and data compression.
Types of FEC Codes:
1. Linear block codes.
2. Convolutional codes.
1. Linear Block Codes: With block codes, a block of data has error-detecting and error-correcting bits added to it. One of the simplest error-correcting block codes is the Hamming code, where parity bits are added to the data. By adding the error-correcting bits to the data, transmission errors can be corrected. However, since more data has to be squeezed into the same channel bandwidth, more errors will occur. Linear block codes have the property of linearity, i.e. the sum of any two code words is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property. Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n, m, dmin), where n is the length of the codeword in symbols, m is the number of source symbols that will be encoded at once, and dmin is the minimum Hamming distance of the code. Block codes accept k bits at their input and forward n bits at their output; these codes are frequently known as (n, k) codes. Evidently, whatever the coding scheme, it adds n−k bits to the coded block. Block codes are used primarily to correct or detect errors in data transmission. Commonly used block codes are Reed-Solomon codes, BCH codes, Golay codes and Hamming codes.
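The (n, k) block-code ideas above can be made concrete with the (7,4) Hamming code mentioned in the text; the particular generator and parity-check matrices below are one standard systematic choice, used here only for illustration:

```python
# One standard generator/parity-check pair for the (7,4) Hamming code
# (n = 7, m = 4, dmin = 3), in systematic form; arithmetic is modulo 2.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(m):
    """Map 4 source bits to a 7-bit codeword: c = m * G (mod 2)."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def correct(r):
    """Single-error correction: the syndrome s = H * r^T (mod 2)
    equals the column of H at the flipped position."""
    s = [sum(H[i][j] * r[j] for j in range(7)) % 2 for i in range(3)]
    if any(s):
        columns = [[H[i][j] for i in range(3)] for j in range(7)]
        r[columns.index(s)] ^= 1      # flip the erroneous bit
    return r
```

Flipping any single bit of a codeword and running `correct` recovers the original word, which is exactly the dmin = 3 correction capability described above.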
2. Convolutional Codes: Unlike block codes, which are memoryless, convolutional codes are coding algorithms with memory. Since their coding rate (R) is higher than its counterpart in block codes, they are the more frequently used coding method in practice. Every convolutional code uses m units of memory; therefore a convolutional code is represented by (n, k, m). In convolutional coding, the input bits are passed through a shift register of length K. N output bits are generated by modulo-2 adding selected bits held in different stages of the shift register; for each new data bit, N output bits are produced. The output bits are influenced by K data bits, so the information is spread in time. The channel code is used to protect data sent over the channel for storage or retrieval even in the presence of noise (errors). In practical communication systems, convolutional codes tend to be among the more widely used channel codes. These codes are used primarily for real-time error correction and can convert an entire data stream into one single codeword. The Viterbi algorithm provided the basis for the main decoding strategy of convolutional codes. The encoded bits depend not only on the current k input information bits but also on past input bits.
IV. TURBO CODES
Turbo codes are among the most powerful types of error control codes (ECC) currently available and a class of high-performance forward error correction (FEC) codes, serving as powerful building blocks in the search for more bandwidth-efficient coding schemes. Turbo codes emerged in 1993 and have since become a popular area of communications research. A turbo code is a combination of both block and convolutional codes. The encoder for a turbo code consists of two convolutional codes in parallel, with their inputs separated by a pseudo-random interleaver. The decoder consists of two Maximum A Posteriori (MAP) decoders connected in series via interleavers, with a feedback loop from the output of the second to the input of the first.
IV.1. Turbo Codes Encoding: The encoder for a turbo code is a parallel concatenated convolutional code. Figure 1 shows a
block diagram of the encoder first presented by Berrou et al. The input sequence is passed into the input of a convolutional
encoder, and a coded bit stream is generated. The data sequence is then interleaved. That is, the bits are loaded into a matrix
and read out in a way so as to spread the positions of the input bits. The bits are often read out in a pseudo-random manner.
The interleaved data sequence is passed to a second convolutional encoder, and a second coded bit stream is generated. The
code sequence that is passed to the modulator for transmission is a multiplexed (and possibly punctured) stream consisting of
systematic code bits and parity bits from both the first encoder and the second encoder.
Fig. 1: Structure of the turbo encoder
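The parallel concatenation described above can be sketched in Python. The RSC polynomials chosen here (feedback 1+D+D², forward 1+D²) and the absence of puncturing are illustrative assumptions, not the paper's exact encoder:

```python
def rsc_parity(bits):
    """Parity stream of a simple recursive systematic convolutional (RSC)
    encoder: feedback polynomial 1+D+D^2, forward polynomial 1+D^2
    (an illustrative choice)."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2            # recursive feedback into the register
        parity.append(fb ^ s2)      # forward taps 1 + D^2
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, perm):
    """Rate-1/3 turbo encoding: the systematic bits plus parity from the
    original sequence and from the interleaved sequence (perm is the
    pseudo-random interleaver permutation; puncturing is omitted)."""
    interleaved = [bits[i] for i in perm]
    return bits, rsc_parity(bits), rsc_parity(interleaved)
```

The three returned streams correspond to the multiplexed systematic and parity bits that Fig. 1 passes to the modulator.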
Interleaving: It is a device for reordering a sequence of bits or symbols. A familiar role of interleavers in
communications is that of the symbol interleaver which is used after error control coding and signal mapping to ensure that
fading bursts affecting blocks of symbols transmitted over the channel are broken up at the receiver by a de-interleaver, prior
to decoding.
IV.2. Turbo Codes Decoding
Fig. 2: Iterative turbo decoding
In a typical turbo decoding system (see Fig. 2), two decoders operate iteratively and pass their decisions to each other after each iteration. These decoders should produce soft outputs to improve the decoding performance; such a decoder is called a Soft-Input Soft-Output (SISO) decoder. Each decoder operates not only on its own input but also on the other decoder's incompletely decoded output, which resembles the operating principle of turbo engines. This analogy between the operation of the turbo decoder and the turbo engine gives this coding technique its name, "turbo codes". The turbo decoding process can be explained as follows: the encoded information sequence Xk is transmitted over an Additive White Gaussian Noise (AWGN) channel, and a noisy received sequence Yk is obtained. Each decoder calculates the Log-Likelihood Ratio (LLR) for the k-th data bit dk as
L(dk) = log [P(dk = 1 | y) / P(dk = 0 | y)] (1)
The LLR can be decomposed into three independent terms, as
L(dk) = Lapri(dk) + Lc(dk) + Le(dk) (2)
where Lapri(dk) is the a-priori information of dk, Lc(dk) is the channel measurement, and Le(dk) is the extrinsic information exchanged between the constituent decoders. Extrinsic information from one decoder becomes the a-priori information for the other decoder at the next decoding stage. Le12 and Le21 in Fig. 2 represent the extrinsic information passed from decoder 1 to decoder 2 and from decoder 2 to decoder 1, respectively.
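A tiny numeric illustration of the decomposition in eq. (2); the LLR values used here are invented for illustration, not taken from any simulation:

```python
import math

def llr(p1):
    """Log-likelihood ratio log(P(d=1) / P(d=0)) for P(d=1) = p1."""
    return math.log(p1 / (1.0 - p1))

# Illustrative numbers: a decoder's total soft output and its components.
L_total = 2.4        # decoder soft output L(dk)
L_apriori = 0.5      # a-priori term Lapri(dk), from the other decoder
L_channel = 1.1      # channel measurement Lc(dk)

# Rearranging eq. (2): the extrinsic part passed to the other decoder.
L_extrinsic = L_total - L_apriori - L_channel    # = 0.8
```

Only `L_extrinsic` is exchanged between the constituent decoders, so each decoder receives new information rather than an echo of its own output.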
LLR computations can be performed using one of the two main turbo decoding algorithms, the SOVA and MAP algorithms. The MAP algorithm seeks the most likely data sequence, whereas SOVA, a modified version of the Viterbi algorithm, seeks the most likely connected path through the encoder trellis. The MAP algorithm is more complex than SOVA. At high SNR, the performance of SOVA and MAP is almost the same; however, at low signal-to-noise ratios (SNRs) the MAP algorithm is superior to SOVA by 0.5 dB or more. The following sections explain the MAP algorithm and its simplified versions, the Log-MAP and Max-Log-MAP algorithms.
V. DECODING ALGORITHMS FOR TURBO CODES
We now review the decoding algorithms used within DEC1 and DEC2 to implement the soft-input, soft-output processing needed for iterative decoding. We begin with the Maximum A Posteriori (MAP) algorithm. Decoding of convolutional codes is most frequently achieved using the Viterbi algorithm, which makes use of a decoding trellis to record the estimated states of the encoder at a set of time instants. The Viterbi algorithm works by rejecting the least likely path through the trellis at each node and keeping the most likely one. The removal of unlikely paths usually leaves a single source path further back in the trellis. This path selection represents a 'hard' decision on the transmitted sequence.
The Viterbi decoder estimates a maximum-likelihood sequence. Making hard decisions in this way, at an early point in the decoding process, represents a loss of valuable information. It is frequently advantageous to retain finely graded probabilities, 'soft decisions', until all possible information has been extracted from the received signal values. Turbo decoding relies on passing information about individual transmitted bits from one decoding stage to the next. The interleaving of the received information sequence between decoders limits the usefulness of estimating maximum-likelihood sequences, so an algorithm is required that can output soft-decision maximum-likelihood estimates on a bit-by-bit basis. The decoder should also be able to accept soft-decision inputs from the previous iteration of the decoding process. Such a decoder is termed Soft-Input Soft-Output (SISO). Berrou and Glavieux used two such decoders in each stage of their turbo decoder. They implemented the decoders using a modified version of a SISO algorithm proposed by Bahl, Cocke, Jelinek and Raviv [31]. Their modified Bahl algorithm is commonly referred to as the Maximum A Posteriori or MAP algorithm, and achieves soft-decision decoding on a bit-by-bit basis by making two passes of a decoding trellis, as opposed to one in the case of the Viterbi algorithm. The MAP algorithm is an optimal but computationally complex SISO algorithm. The Log-MAP and Max-Log-MAP algorithms are simplified versions of the MAP algorithm. The MAP algorithm calculates LLRs for each information bit as
L(dk) = log [ Σ αk−1(Sk−1) γ1(Sk−1, Sk) βk(Sk) / Σ αk−1(Sk−1) γ0(Sk−1, Sk) βk(Sk) ] (3)
where α is the forward state metric, β is the backward state metric, γ is the branch metric, and Sk is the trellis state at trellis time k; the sums in the numerator and denominator run over the trellis transitions (Sk−1, Sk) caused by dk = 1 and dk = 0, respectively. Forward state metrics are calculated by a forward recursion from trellis time k = 1 to k = N, where N is the number of information bits in one data frame. The recursive calculation of forward state metrics is performed as
αk(Sk) = Σ(j=0..1) αk−1(Sk−1) γj(Sk−1, Sk) (4)
Similarly, the backward state metrics are calculated by a backward recursion from trellis time k = N to k = 1 as
βk(Sk) = Σ(j=0..1) βk+1(Sk+1) γj(Sk, Sk+1) (5)
Branch metrics are calculated for each possible trellis transition as
γi(Sk−1, Sk) = Ak exp[ (Lc/2) (xs_k ys_k + xp_k yp_k) ] (6)
where i = (0, 1), Ak is a constant, Lc is the channel reliability value, xs_k and xp_k are the encoded systematic data bit and parity bit, and ys_k and yp_k are the received noisy systematic data bit and parity bit, respectively.
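The recursions (4) and (5) can be sketched as plain forward-backward products. For brevity the per-bit branch metrics γ0, γ1 are assumed pre-summed into one total branch metric per transition, `gamma[k][s_prev][s]`, a simplification of the paper's notation:

```python
def forward_recursion(alpha0, gamma):
    """Eq. (4): alpha_k(s) = sum over s' of alpha_{k-1}(s') * gamma_k(s', s).
    gamma[k][s_prev][s] is the (total) branch metric at trellis step k."""
    alphas = [alpha0]
    for gk in gamma:
        prev = alphas[-1]
        nxt = [sum(prev[sp] * gk[sp][s] for sp in range(len(prev)))
               for s in range(len(prev))]
        alphas.append(nxt)
    return alphas

def backward_recursion(betaN, gamma):
    """Eq. (5): beta_{k-1}(s') = sum over s of gamma_k(s', s) * beta_k(s),
    run from trellis time k = N back to k = 1."""
    betas = [betaN]
    for gk in reversed(gamma):
        nxt_beta = betas[0]
        prev = [sum(gk[sp][s] * nxt_beta[s] for s in range(len(nxt_beta)))
                for sp in range(len(nxt_beta))]
        betas.insert(0, prev)
    return betas
```

In a real decoder the α and β values at each step, combined with the per-bit branch metrics, feed the numerator and denominator sums of eq. (3).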
LOG-MAP ALGORITHM: To avoid the complex mathematical calculations of MAP decoding, computations can be performed in the logarithmic domain. Furthermore, logarithm and exponential computations can be eliminated by the following approximation (the Jacobian logarithm):
log(e^x + e^y) = max(x, y) + log(1 + e^−|x−y|) = max*(x, y)
Equations (3)-(6) then become their log-domain counterparts: products of metrics turn into sums of log-domain metrics, and each sum of exponentials is evaluated with the max*(.) operator, where K is a constant replacing Ak. The Log-MAP parameters are very close approximations of the MAP parameters and, therefore, the Log-MAP BER performance is close to that of the MAP algorithm.
MAX-LOG-MAP ALGORITHM: The correction function fc = log(1 + e^−|y−x|) in the max*(.) operation can be implemented in different ways. The Max-Log-MAP algorithm simply neglects the correction term and approximates the operator as max*(x, y) ≈ max(x, y), at the expense of some performance degradation. This simplification eliminates the need for the look-up table (LUT) required to find the corresponding correction factor in the max*(.) operation.
VI. PRINCIPLES OF ITERATIVE DECODING
In a typical communications receiver, a demodulator is often designed to produce soft decisions, which are then
transferred to a decoder. The improvement in error performance of systems utilizing such soft decisions is typically
about 2 dB compared with hard decisions in AWGN. Such a decoder could be called a soft input/hard output
decoder, because the final decoding process out of the decoder must terminate in bits (hard decisions). With turbo codes,
where two or more component codes are used and decoding involves feeding outputs from one decoder to the inputs of
other decoders in an iterative fashion, a hard-output decoder would not be suitable, because feeding hard decisions into a
decoder degrades system performance compared to soft decisions.
Hence, what is needed for the decoding of turbo codes is a soft input/soft output decoder. For the first decoding
iteration of such a soft input/soft output decoder, we generally assume the binary data to be equally likely, yielding an initial
a priori LLR value of L(d) = 0. The channel LLR value, Lc(x), is measured by forming the logarithm of the ratio of the channel
likelihood values of x under the two data hypotheses. The output L(d) of the decoder in Figure 3 is made up of the LLR from the detector, L'(d), and the extrinsic LLR output,
Le(d), representing knowledge gleaned from the decoding process. As illustrated in Figure 3, for iterative decoding, the
extrinsic likelihood is fed back to the decoder input, to serve as a refinement of the a priori probability of the data for the
next iteration.
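The extrinsic feedback loop of Figure 3 can be sketched in a few lines. The `siso_decode` stub below is a hypothetical stand-in for any real SISO component decoder (MAP, Log-MAP, or SOVA); it only illustrates the LLR bookkeeping L(d) = Lc(x) + La(d) + Le(d), not actual trellis decoding.

```python
def siso_decode(channel_llrs, apriori_llrs):
    """Hypothetical SISO stand-in: returns a posteriori LLRs. A real decoder
    would run BCJR/SOVA here; this stub just reinforces the channel value
    by half to mimic an extrinsic gain."""
    return [lc + la + 0.5 * lc for lc, la in zip(channel_llrs, apriori_llrs)]

def iterate(channel_llrs, iterations=8):
    """Iterative decoding skeleton: extrinsic output becomes the next a priori."""
    apriori = [0.0] * len(channel_llrs)  # first pass: bits equally likely, L(d) = 0
    posterior = channel_llrs
    for _ in range(iterations):
        posterior = siso_decode(channel_llrs, apriori)
        # Extrinsic LLR = a posteriori minus channel and a priori contributions,
        # fed back as the refined a priori for the next iteration.
        apriori = [lp - lc - la
                   for lp, lc, la in zip(posterior, channel_llrs, apriori)]
    return posterior

final_llrs = iterate([2.0, -1.0], iterations=3)
hard_bits = [1 if L > 0 else 0 for L in final_llrs]
```

Subtracting the channel and a priori terms before feedback is the essential step: passing the full a posteriori LLR back instead would re-count the same information each iteration and the decoder outputs would no longer be (approximately) independent.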
Fig. 3: Soft input/soft output decoder
VII. SIMULATION RESULTS
The simulation curves presented shows the influence of iteration number, Block length, code rate and code
generator. Rate ½ codes are obtained from their rate 1/3 counterparts by alternately puncturing the parity bits of the
constituent encoders. In figures (4-5) BER for SOVA and LOG MAP as a function of Eb/No curves are shown for
constituent codes of constraint length three and code rate ½. Eight decoding iterations were performed for Block length of
1024 . Also the improvement achieved when the block length is increased from 1024 to 4096 for both algorithms. For figure
6, LOG MAP shows better performance than SOVA for constraint length of three and for block length of 1024.And from the
figure 7,we can observe the BER performances of LOG MAP and MAX-LOG MAP algorithms. The MAX-LOG MAP
algorithm gives better BER performance.
Fig. 4: Iterations performed by the SOVA decoding algorithm
Fig. 5: Iterations performed by the Log-MAP decoding algorithm
Fig. 6: BER performances of the SOVA and Log-MAP decoding algorithms
Fig. 7: BER performances of the Max-Log-MAP and Log-MAP decoding algorithms
VIII. CONCLUSION
Our simulation results show that the Max-Log-MAP decoding algorithm performs better across block
lengths than SOVA and Log-MAP, and is thus more suitable for wireless communication.