Luby Transform (LT) codes are efficient erasure fountain codes. Construction of the coded symbols is based on the degree distribution, which plays a significant role in ensuring a smooth decoding process. In this paper, we propose a new encoding scheme for LT code generation. This encoding presents a deterministic degree generation (DDG) with a time-hopping pattern, which is found to be suitable for short data lengths, where the well-known Robust Soliton Distribution (RSD) suffers severe performance degradation. It is shown via computer simulations that the proposed DDG yields the fewest unrecovered data packets when compared with random degree distributions such as RSD and non-uniform data selection (NUDS). This success translates into reducing the overhead required for data recovery to the order of 25% for a data length of 32 packets.
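For reference, the RSD baseline mentioned above can be sketched as follows. This is a minimal illustration of the standard robust soliton construction; the parameters c and delta are free tuning choices, not values taken from the paper.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Return the robust soliton degree distribution for k source packets.

    Index d of the returned list is the probability of degree d (index 0 unused).
    """
    S = c * math.log(k / delta) * math.sqrt(k)          # expected ripple size
    pivot = int(round(k / S))                           # spike location k/S
    # ideal soliton component: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # robust component tau adds mass at small degrees plus a spike at k/S
    tau = [0.0] * (k + 1)
    for d in range(1, pivot):
        tau[d] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = S * math.log(S / delta) / k
    Z = sum(rho) + sum(tau)                             # normalization constant
    return [(rho[d] + tau[d]) / Z for d in range(k + 1)]
```

Sampling a degree from this list and XOR-ing that many randomly chosen source packets is the conventional LT encoding step that the proposed DDG replaces with a deterministic sequence.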
For wireless communication, high-speed, low-power, and low-cost Viterbi decoding is always in demand. Convolutional coding with Viterbi decoding is a very powerful method for forward error correction and detection. It has been widely used in many wireless communication systems to improve the limited capacity of their communication channels. Advanced VLSI technology, under low-power, small-area, and high-speed constraints, is often used for encoding and decoding of data.
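To make the algorithm concrete, the following is a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal. It is a software sketch of the textbook algorithm, not a low-power VLSI design.

```python
G = [0b111, 0b101]  # generator polynomials (octal 7, 5), constraint length 3

def conv_encode(bits):
    """Encode a bit list at rate 1/2, appending two tail bits to flush the state."""
    state = 0                              # two-bit shift register
    out = []
    for u in bits + [0, 0]:
        reg = (u << 2) | state             # bits: [newest input, s1, s2]
        for g in G:
            out.append(bin(reg & g).count("1") % 2)
        state = (reg >> 1) & 0b11          # shift the register
    return out

def viterbi_decode(rx, n_bits):
    """Maximum-likelihood decode of a hard-decision received bit list."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]           # path metrics, start in state 0
    paths = [[] for _ in range(4)]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_m = [INF] * 4
        new_p = [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                expected = [bin(reg & g).count("1") % 2 for g in G]
                bm = sum(a != b for a, b in zip(expected, r))   # Hamming metric
                ns = (reg >> 1) & 0b11
                m = metrics[s] + bm
                if m < new_m[ns]:          # keep the survivor path per state
                    new_m[ns] = m
                    new_p[ns] = paths[s] + [u]
        metrics, paths = new_m, new_p
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:n_bits]            # drop the tail bits
```

With free distance 5, this code corrects up to two channel bit errors per constraint span, which the survivor-path minimization recovers automatically.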
EXTRACTIVE SUMMARIZATION WITH VERY DEEP PRETRAINED LANGUAGE MODEL (ijaia)
Recent development of generative pretrained language models has proven very successful on a wide range of NLP tasks, such as text classification, question answering, and textual entailment. In this work, we present a two-phase encoder-decoder architecture based on Bidirectional Encoder Representations from Transformers (BERT) for the extractive summarization task. We evaluated our model with both automatic metrics and human annotators, and demonstrated that the architecture achieves results comparable to the state of the art on the large-scale CNN/Daily Mail corpus. To the best of our knowledge, this is the first work that applies a BERT-based architecture to a text summarization task and achieves state-of-the-art comparable results.
Features of genetic algorithm for plain text encryption (IJECEIAES)
Data communication has been growing steadily. Therefore, data encryption has become essential for secure data transmission and storage, and for protecting data contents from intruders and unauthorized persons. In this paper, a fast technique for text encryption based on a genetic algorithm is presented. The encryption is achieved with the genetic operators crossover and mutation. The proposed technique divides the plain-text characters into pairs and applies the crossover operation between them, followed by the mutation operation, to obtain the encrypted text. The experimental results show that the proposal provides an important improvement in encryption rate with comparatively high-speed processing.
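The pair-wise crossover-and-mutation idea can be illustrated with the toy sketch below. The fixed crossover point, the XOR-mask mutation, and the byte-list key format are all assumptions made for illustration; they are not the authors' exact operators.

```python
def crossover(a, b, point=4):
    """Swap the low `point` bits of two character codes (self-inverse)."""
    mask = (1 << point) - 1
    return (a & ~mask) | (b & mask), (b & ~mask) | (a & mask)

def mutate(c, key_byte):
    """XOR mutation keyed by one byte (self-inverse)."""
    return c ^ key_byte

def encrypt(text, key):
    codes = [ord(ch) for ch in text]
    if len(codes) % 2:
        codes.append(0)                    # pad to an even length
    out = []
    for i in range(0, len(codes), 2):      # operate on character pairs
        a, b = crossover(codes[i], codes[i + 1])
        out.append(mutate(a, key[i % len(key)]))
        out.append(mutate(b, key[(i + 1) % len(key)]))
    return out

def decrypt(codes, key):
    out = []
    for i in range(0, len(codes), 2):      # undo mutation, then crossover
        a = mutate(codes[i], key[i % len(key)])
        b = mutate(codes[i + 1], key[(i + 1) % len(key)])
        a, b = crossover(a, b)
        out.extend([a, b])
    return "".join(chr(c) for c in out).rstrip("\x00")
```

Because both operators are self-inverse, decryption simply applies them in reverse order with the same key.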
Hiding text in speech signal using K-means, LSB techniques and chaotic maps (IJECEIAES)
In this paper, a new technique that hides a secret text inside a speech signal without any apparent noise is presented. The secret text is encoded by first scrambling it using a chaotic map, then encoding the scrambled text using the Zaslavsky map, and finally hiding it by breaking the speech signal into blocks and using only half of each block with the LSB and K-means algorithms. The measures (SNR, PSNR, correlation, SSIM, and MSE) are applied to various speech files (.WAV) and various secret texts. We observed that the suggested technique offers high security (SNR, PSNR, correlation, and SSIM) for the encrypted text with low error (MSE). This indicates that the noise level in the speech signal is very low and the speech purity is high, so the suggested method is effective for embedding encrypted text into speech files.
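The LSB embedding step alone can be sketched as below. The chaotic scrambling, Zaslavsky encoding, and K-means block selection from the paper are omitted; this shows only how message bits replace sample LSBs, which is why the change to each sample is at most one quantization step.

```python
def text_to_bits(text):
    """Flatten a string into a list of bits, 8 per character."""
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def embed_lsb(samples, bits):
    """Overwrite the least significant bit of the first len(bits) samples."""
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(samples, n_bits):
    """Read n_bits LSBs back and reassemble them into characters."""
    bits = [s & 1 for s in samples[:n_bits]]
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, n_bits, 8))
```

For 16-bit speech, flipping only the LSB perturbs each sample by at most 1/32768 of full scale, which is why the stego signal carries no audible noise.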
DENIAL OF SERVICE LOG ANALYSIS USING DENSITY K-MEANS METHOD (Ardymulya Iswardani)
Denial-of-service attacks launched by flooding data at an FTP server cause the server to be unable to handle requests from legitimate users. One technique for detecting these attacks is monitoring, but several problems arise, including the difficulty of distinguishing attack traffic from normal data traffic. Field studies of forensic triage are therefore needed to obtain vital information at the scene that supports the overall digital forensics investigation. Forensic triage begins with the log databases, which are then grouped using the density k-means algorithm to produce three levels of danger (low, medium, and high).
The proposed density k-means algorithm uses three groups that represent the level of danger. The minimum, median, and maximum values of the dataset serve as the initial centroids, and each data point joins the cluster whose centroid is at minimum distance. The density of each resulting cluster around its centroid is then evaluated using the Davies-Bouldin index (DBI).
Clustering the dataset produced three clusters, but only two levels of danger were successfully identified, namely medium and high. The DBI value obtained was 0.082, indicating that the data used were homogeneous; the DBI result is also influenced by the selection of the initial centroid values for the clustering process.
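The min/median/max seeding and the DBI evaluation described above can be sketched for one-dimensional log features (e.g., request counts per interval) as follows; the iteration count and the 1-D distance are simplifying assumptions of this sketch.

```python
def kmeans3(values, iters=20):
    """K-means with k=3, seeded by the minimum, median, and maximum values."""
    vs = sorted(values)
    cents = [vs[0], vs[len(vs) // 2], vs[-1]]
    for _ in range(iters):
        clusters = [[] for _ in cents]
        for v in values:
            nearest = min(range(3), key=lambda i: abs(v - cents[i]))
            clusters[nearest].append(v)
        cents = [sum(c) / len(c) if c else cents[i]      # keep empty centroids
                 for i, c in enumerate(clusters)]
    return cents, clusters

def davies_bouldin(cents, clusters):
    """Davies-Bouldin index; lower values mean tighter, better-separated clusters.

    Assumes the final centroids are pairwise distinct.
    """
    S = [sum(abs(v - c) for v in cl) / len(cl) if cl else 0.0
         for c, cl in zip(cents, clusters)]              # intra-cluster scatter
    ratios = []
    for i in range(len(cents)):
        r = max((S[i] + S[j]) / abs(cents[i] - cents[j])
                for j in range(len(cents)) if j != i and cents[i] != cents[j])
        ratios.append(r)
    return sum(ratios) / len(ratios)
```

A small DBI, as reported in the abstract, means each danger-level cluster is compact relative to the separation between centroids.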
A Real Time Framework of Multiobjective Genetic Algorithm for Routing in Mobi... (IDES Editor)
Routing in mobile networks is a multiobjective optimization problem that must consider multiple objectives simultaneously, such as Quality of Service parameters, delay, and cost. This paper uses the NSGA-II multiobjective genetic algorithm to solve the dynamic shortest-path routing problem in mobile networks and proposes a framework for real-time software implementation. Simulations confirm a good quality of solution (route optimality) and a high rate of convergence.
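The multiobjective aspect, minimizing, say, delay and cost together, reduces to non-dominated sorting, which is the core of NSGA-II. A minimal Pareto-front filter looks like this; the route objective values are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(routes):
    """Keep only the non-dominated (delay, cost) candidates."""
    return [r for r in routes if not any(dominates(o, r) for o in routes if o != r)]
```

NSGA-II repeatedly applies this dominance test to rank a population into fronts, then uses crowding distance within each front to preserve diversity among routes.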
BIDIRECTIONAL LONG SHORT-TERM MEMORY (BILSTM) WITH CONDITIONAL RANDOM FIELDS (... (ijnlc)
This study investigates the effectiveness of knowledge named entity recognition in Online Judges (OJs). OJs lack topic classification and are limited to problem IDs only; therefore, a lot of time is consumed in finding programming problems, and more specifically knowledge entities. A Bidirectional Long Short-Term Memory (BiLSTM) with Conditional Random Fields (CRF) model is applied for the recognition of knowledge named entities in the solution reports. For the test run, more than 2000 solution reports were crawled from the Online Judges and processed for the model output. The stability of the model is also assessed via the F1 value. The results obtained through the proposed BiLSTM-CRF model are more effective (F1: 98.96%) and efficient in lead time.
Table-Based Identification Protocol of Computational RFID Tags (csandit)
Computational RFID (CRFID) expands the limits of traditional RFID by granting computational capability to RFID tags, which are powered by radio-frequency (RF) energy harvesting. However, CRFID tags need more energy and processing time than traditional RFID tags to sense the environment and generate sensing data. Therefore, the Dynamic Framed Slotted ALOHA (DFSA) protocol for traditional RFID is no longer the best solution for CRFID systems. In this paper, we propose a table-based CRFID tag identification protocol that takes CRFID operation into account. The RFID reader sends a message with the frame index. Using the frame index, CRFID tags report their energy-requirement and processing-time information to the reader along with the data transmission. The RFID reader records the received tag information in a table. After the table is complete, the optimal frame size and a proper time interval are chosen based on the table. Simulations show that the proposed CRFID tag identification protocol enhances the identification rate and reduces the delay.
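For context, the DFSA baseline that the paper improves on can be simulated in a few lines. Setting the frame size equal to the current backlog is an assumption of this sketch, standing in for a real backlog estimator, and is precisely what the proposed protocol replaces with table-based sizing.

```python
import random

def dfsa_identify(n_tags, seed=0):
    """Simulate framed slotted ALOHA identification; return the round count."""
    rng = random.Random(seed)
    remaining = set(range(n_tags))
    rounds = 0
    while remaining:
        frame = max(1, len(remaining))        # frame size tracks the backlog
        slots = {}
        for tag in remaining:                 # each tag picks a random slot
            slots.setdefault(rng.randrange(frame), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:           # singleton slot -> successful read
                remaining.discard(occupants[0])
        rounds += 1
    return rounds
```

Collided slots (two or more tags) yield nothing, which is why frame sizing matters: too small a frame wastes rounds on collisions, too large a frame wastes empty slots.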
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Selective Green Device Discovery for Device to Device Communication (TELKOMNIKA JOURNAL)
D2D communication is expected to improve devices' energy efficiency, which has become a major requirement of future wireless networks. Before D2D communication can be performed, device discovery between devices must be carried out. Previous works usually assumed only one mode of device discovery, i.e., either network-assisted (with network supervision) or independent (without network supervision). Therefore, we propose a selective device discovery scheme for device-to-device (D2D) communication that can utilize both device discovery modes while maintaining devices' energy efficiency. Unlike previous works, our proposed method selects the device discovery mode that gives the best energy efficiency. Moreover, to further improve energy efficiency, the proposed method is also deployed in a D2D cluster with multiple cluster heads. The method selects the most suitable mode using thresholds (cluster energy consumption and new device acceptance) and the cluster energy expectation. Our experimental results indicate that the proposed method provides the lowest energy consumption per newly accepted device when compared with fully network-assisted and fully independent device discovery schemes for low numbers of new device arrivals (number of new devices = 1 to 3).
New Heuristic Model for Optimal CRC Polynomial (IJECEIAES)
Cyclic redundancy codes (CRCs) are important for maintaining integrity in data transmission. CRC performance is mainly determined by the polynomial chosen. Recent increases in data throughput require a foray into determining optimal polynomials through software or hardware implementations. Most CRC implementations in use offer less than optimal performance or are inferior to their newer published counterparts. Classical approaches to determining optimal polynomials involve brute-force searching of the set of all possible polynomials. This paper evaluates the performance of CRC polynomials generated with genetic algorithms. It then compares the resulting polynomials, both with and without encryption headers, against a benchmark polynomial.
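Independent of how the polynomial is chosen, a CRC itself is polynomial long division over GF(2). The sketch below appends check bits and verifies them; the degree-3 polynomial x^3 + x + 1 is only an example, not one of the paper's candidates.

```python
def mod2_div(msg, poly):
    """Divide a bit list by a generator polynomial over GF(2); return remainder."""
    reg = list(msg)
    for i in range(len(msg) - len(poly) + 1):
        if reg[i]:                         # leading 1 -> subtract (XOR) the poly
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(msg) - len(poly) + 1:]  # last deg(poly) bits are the remainder

def crc_append(data, poly):
    """Append the CRC remainder so the codeword divides evenly by poly."""
    rem = mod2_div(data + [0] * (len(poly) - 1), poly)
    return data + rem
```

A received codeword is accepted only if its remainder is all zeros; any error pattern that is not a multiple of the generator polynomial leaves a nonzero remainder, which is exactly the property a good polynomial choice maximizes.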
The evaluation performance of letter-based technique on text steganography sy... (journalBEEI)
Steganography is a branch of information hiding that conceals a hidden message in a medium such as text, image, audio, or video. This paper concerns the implementation of steganography in the text domain, called text steganography. It concentrates on the letter-based technique as one of the representative techniques in text steganography. The paper presents several letter-based techniques integrated into one system, described through a logical and physical design. The integrated system is evaluated using parameters chosen to measure performance in terms of capacity after the embedding process and the time consumed in the development process. This paper is intended to contribute by describing the implementation of the techniques in one system and by presenting their performance under several evaluation parameters.
Review and comparison of tasks scheduling in cloud computing (ijfcstjournal)
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent computing resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. Cloud computing is a virtual pool of resources provided to users via the Internet. It gives users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a critical problem in cloud computing, because a cloud provider has to serve many users in a cloud computing system, so scheduling is a major issue in establishing such systems. Scheduling algorithms should order the jobs so as to balance improved performance and quality of service against efficiency and fairness among the jobs. This paper introduces and explores some of the scheduling methods proposed for cloud computing. Finally, the waiting time and execution time of some of the proposed algorithms are evaluated.
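As a concrete instance of the waiting-time comparison, first-come-first-served (FCFS) versus shortest-job-first (SJF) on a single machine can be computed directly; the burst times below are invented for illustration.

```python
def avg_wait(burst_times):
    """Average waiting time when jobs run back-to-back in the given order."""
    wait, elapsed = 0, 0
    for b in burst_times:
        wait += elapsed        # this job waits for everything already queued
        elapsed += b
    return wait / len(burst_times)

jobs = [8, 3, 1, 5]            # arrival order (FCFS)
fcfs = avg_wait(jobs)          # 7.75
sjf = avg_wait(sorted(jobs))   # 3.5: shortest jobs first minimizes average wait
```

SJF is provably optimal for mean waiting time on one machine, but it can starve long jobs, which is the fairness trade-off the surveyed algorithms try to balance.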
Performance Comparison of Automatic Speaker Recognition using Vector Quantiza... (CSCJournals)
In this paper, three approaches to automatic speaker recognition based on vector quantization are proposed and their performances are compared. Vector quantization (VQ) is used for feature extraction in both the training and testing phases. Three methods for codebook generation have been used. In the first method, codebooks are generated from the speech samples using the Linde-Buzo-Gray (LBG) algorithm. In the second, the codebooks are generated using Kekre's Fast Codebook Generation (KFCG) algorithm, and in the third, using Kekre's Median Codebook Generation (KMCG) algorithm. For speaker identification, the codebook of the test sample is generated in the same way and compared with the codebooks of the reference samples stored in the database. The results for the three methods have been compared; they show that KFCG gives better results than LBG, while KMCG gives the best results.
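The LBG codebook generation used in the first method can be sketched as centroid splitting plus k-means refinement; the perturbation factor and iteration count below are arbitrary choices of this sketch, not the paper's settings.

```python
def lbg(vectors, size, eps=0.01, iters=10):
    """Grow a codebook by doubling: split each codevector, then refine by k-means."""
    dim = len(vectors[0])
    mean = [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]
    book = [mean]
    while len(book) < size:
        # split every codevector into a slightly perturbed pair
        book = [[x * (1 + s) for x in c] for c in book for s in (eps, -eps)]
        for _ in range(iters):             # Lloyd (k-means) refinement
            groups = [[] for _ in book]
            for v in vectors:
                i = min(range(len(book)),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(v, book[j])))
                groups[i].append(v)
            book = [[sum(v[d] for v in g) / len(g) for d in range(dim)] if g else c
                    for g, c in zip(groups, book)]
    return book
```

At recognition time, a test utterance is quantized against each speaker's codebook and the speaker with the lowest total quantization distortion is chosen.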
Investigating the Performance of NoC Using Hierarchical Routing Approach (IJERA Editor)
The Network-on-Chip (NoC) model has emerged as a revolutionary methodology for integrating a large number of intellectual property (IP) blocks on a die. According to the International Technology Roadmap for Semiconductors (ITRS), device sizes must be scaled down, and long interconnections should be avoided; new interconnect patterns are therefore needed. Three-dimensional ICs are capable of achieving superior performance, better resistance to noise, and lower interconnect power consumption compared with traditional planar ICs. In this paper, network data are routed using a hierarchical methodology. We analyze the total number of logic gates and registers, the power consumption, and the delay when different widths of data are transmitted, using the Quartus II software.
Faster Training Algorithms in Neural Network Based Approach For Handwritten T... (CSCJournals)
Handwritten text and character recognition is a challenging task compared with the recognition of handwritten numerals or computer-printed text, due to its large natural variety. Practical pattern recognition problems involve bulk data, and in principle there is a one-step deterministic method to solve the recognition problem: compute the inverse of the Hessian matrix and multiply that inverse by the first-order local gradient vector. In practice, however, when the neural network is large, inverting the Hessian matrix is not manageable, and a further condition must hold: the Hessian matrix must be positive definite, which may not be satisfied. In these cases, iterative recursive models are used instead. Research over the past decade has shown that neural-network-based approaches provide the most reliable performance in handwritten character and text recognition, but recognition performance depends on important factors such as the number of training samples, reliable features and the number of features per character, the training time, and the variety of handwriting. Important features from different types of handwriting are collected and fed to the neural network for training. More features do increase test efficiency, but they also make the error curve take longer to converge. To reduce this training time, a proper training algorithm should be chosen so that the system provides the best training and test efficiency in the least possible time, that is, the fastest intelligence. We have used several second-order conjugate-gradient algorithms for training the neural network and found the scaled conjugate gradient (SCG) algorithm, a second-order training algorithm, to be the fastest for our application. Training using SCG takes minimum time with excellent test efficiency. A scanned handwritten text is taken as input and character-level segmentation is performed.
Some important and reliable features are extracted from each character and used as input to a neural network for training. When the error reaches a satisfactory level (10^-12), the weights are accepted for testing a test script. Finally, a lexicon-matching algorithm resolves the minor misclassification problems.
Survey on Error Control Coding Techniques (IJTET Journal)
Abstract - Error control coding techniques are used to ensure that the information received is correct and has not been corrupted by the environmental defects and noise occurring during transmission or during data read operations from memory. Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission; data corruption is then handled through the detection and correction of bytes by applying modern coding techniques. Error control coding is divided into automatic repeat request (ARQ) and forward error correction (FEC). In ARQ, when the receiver detects an error, it requests the sender to retransmit the data. FEC adds redundant data to a message so that the receiver can recover it even when a number of errors were introduced, either during data transmission or in storage. Detection and correction of burst errors can be obtained with Reed-Solomon codes, while low-density parity-check codes furnish outstanding performance remarkably close to the Shannon limit.
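As a minimal FEC example in the spirit of this survey (far simpler than Reed-Solomon or LDPC), a Hamming(7,4) code corrects any single bit error: the three parity bits form a syndrome that directly names the flipped position.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity bits at positions 1,2,4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]         # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]         # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]         # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3             # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1                    # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

This is the FEC half of the ARQ/FEC split described above: the receiver repairs the error locally instead of requesting retransmission.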
This paper introduces a Simulink model design for a modified fountain code, a new version of the traditional Luby transform (LT) codes.
The design constructs the blocks required for generating the generator matrix of a limited-degree-hopping-segment Luby transform (LDHS-LT) code. This code is especially designed for short data files, which are of great interest for wireless sensor networks. It generates the degrees in a predetermined sequence rather than randomly, and partitions the data file into segments. The data packets are selected serially according to the integers produced by the degree and segment generators. The code is tested using a Monte Carlo simulation approach against conventional code generation using the robust soliton distribution (RSD) for degree generation, and the simulation results confirm better performance on all tested parameters.
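The XOR-based encoding and the peeling decoder common to all LT variants can be sketched as follows. The uniform random degree choice here is purely illustrative; it is exactly the component the LDHS-LT design replaces with deterministic degree and segment generators.

```python
import random

def lt_encode(packets, n_coded, seed=1):
    """Produce coded symbols as (neighbor index set, XOR of those packets)."""
    rng = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        d = rng.randint(1, k)              # illustrative uniform degree, not RSD
        idx = rng.sample(range(k), d)
        val = 0
        for i in idx:
            val ^= packets[i]
        coded.append((set(idx), val))
    return coded

def lt_decode(coded, k):
    """Belief-propagation peeling: repeatedly resolve degree-1 symbols."""
    eqs = [[set(i), v] for i, v in coded]
    recovered = [None] * k
    changed = True
    while changed:
        changed = False
        for eq in eqs:
            idx, val = eq
            for i in [j for j in idx if recovered[j] is not None]:
                idx.discard(i)             # peel off already-recovered packets
                val ^= recovered[i]
            eq[1] = val
            if len(idx) == 1:              # degree-1 symbol reveals a packet
                (i,) = idx
                if recovered[i] is None:
                    recovered[i] = val
                    changed = True
    return recovered
```

Decoding stalls whenever no degree-1 symbol remains, which is why the degree sequence, deterministic in LDHS-LT, is the critical design lever for short data lengths.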
CODING SCHEMES FOR ENERGY CONSTRAINED IOT DEVICES (ijmnct)
This paper investigates the application of advanced forward error correction techniques, mainly low-density parity-check (LDPC) codes and polar codes, to IoT networks. These codes are under consideration for 5G systems. Different code parameters, such as the code rate and the number of decoding iterations, are varied to show their effect on network performance. LDPC performed better than the polar code over the IoT network scenario considered in this work, for the same coding rate and number of decoding iterations. Considering bit error rate (BER) performance, LDPC with rate 1/3 provided an improvement of up to 2.6 dB for the additive white Gaussian noise (AWGN) channel, and 2 dB for SUI-3 (a frequency-selective fading channel model). The LDPC code gives an improvement in throughput of about 12% compared with the polar code at a coding rate of 2/3 over the AWGN channel; the corresponding value over the SUI-3 channel is about 10%. Finally, in comparison with LDPC, the polar code shows better energy saving for large numbers of decoding iterations and high coding rates.
CODING SCHEMES FOR ENERGY CONSTRAINED IOT DEVICESijmnct_journal
This paper investigates the application of advanced forward error correction techniques mainly: lowdensity parity checks (LDPC) code and polar code for IoT networks. These codes are under consideration for 5G systems. Different code parameters such as code rate and a number of decoding iterations are used
to show their effect on the performance of the network. LDPC is performed better than polar code, over the IoT network scenario considered in the work, for the same coding rate and the number of decoding iterations. Considering bit error rate (BER) performance, LDPC with rate1/3 provided an improvement of
up to 2.6 dB for additive white Gaussian noise (AWGN) channel, and 2 dB for SUI-3 (frequency selective fading channel model). LDPC code gives an improvement in throughput of about 12% as compared to polar code with a coding rate of 2/3 over AWGN channel. The corresponding values over SUI-3 channel
are about 10%. Finally, in comparison with LDPC, polar code shows better energy saving for large number of decoding iterations and high coding rates.
A Real Time Framework of Multiobjective Genetic Algorithm for Routing in Mobi...IDES Editor
Routing in mobile networks is a multiobjective
optimization problem. The problem needs to consider multiple
objectives simultaneously such as Quality of Service
parameters, delay and cost. This paper uses the NSGA-II
multiobjectve genetic algorithm to solve the dynamic shortest
path routing problem in mobile networks and proposes a
framework for real-time software implementation.
Simulations confirm a good quality of solution (route
optimality) and a high rate of convergence.
BIDIRECTIONAL LONG SHORT-TERM MEMORY (BILSTM)WITH CONDITIONAL RANDOM FIELDS (...ijnlc
This study investigates the effectiveness of Knowledge Named Entity Recognition in Online Judges (OJs). OJs are lacking in the classification of topics and limited to the IDs only. Therefore a lot of time is consumed in finding programming problems more specifically in knowledge entities.A Bidirectional Long Short-Term Memory (BiLSTM) with Conditional Random Fields (CRF) model is applied for the recognition of knowledge named entities existing in the solution reports.For the test run, more than 2000 solution reports are crawled from the Online Judges and processed for the model output. The stability of the model is
also assessed with the higher F1 value. The results obtained through the proposed BiLSTM-CRF model are more effectual (F1: 98.96%) and efficient in lead-time.
Table-Based Identification Protocol of Compuatational RFID Tags csandit
Computation RFID (CRFID) expands the limit of traditional RFID by granting computational
capability to RFID tags. RFID tags are powered by Radio-Frequency (RF) energy harvesting.
However, CRFID tags need extra energy and processing time than traditional RFID tags to
sense the environment and generate sensing data. Therefore, Dynamic Framed Slotted ALOHA
(DFSA) protocol for traditional RFID is no longer a best solution for CRFID systems. In this
paper, we propose a table-based CRFID tag identification protocol considering CRFID
operation. An RFID reader sends the message with the frame index. Using the frame index,
CRFID tags calculate the energy requirement and processing time information to the reader
along with data transmission. The RFID reader records the received information of tags into a
table. After table recording is completed, the optimal frame size and proper time interval is
provided based on the table. The proposed CRFID tag identification protocol is shown to
enhance the identification rate and the delay via simulations.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Selective Green Device Discovery for Device to Device CommunicationTELKOMNIKA JOURNAL
The D2D communication is expected to improve devices’ energy-efficiency, which has become a
major requirement of the future wireless network. Before the D2D communication can be performed, the
device discovery between devices must be done. The previous works usually only assumed one mode of
device discovery, i.e. either use network-assisted (with network supervision) or independent (without
network supervision) device. Therefore, we propose a selective device discovery for device-to-device
(D2D) communication that can utilize both device discovery modes and maintain devices’ energyefficiency.
Different from previous works, our proposed method selects the best device discovery mode to
get the best energy-efficiency. Moreover, to further improve the energy-efficiency, our proposed method
also deployed in D2D cluster with multiple cluster heads. The proposed method selects the most suitable
mode using thresholds (cluster energy consumption and new device acceptance) and cluster energy
expectation. Our experiment result indicates that the proposed method provides lowest energy
consumption per new accepted device while compared with schemes with full network-assisted and
independent device discovery in low numbers of new device arrival (for the number of new devices
arrival = 1 ~ 3).
New Heuristic Model for Optimal CRC Polynomial IJECEIAES
Cyclic Redundancy Codes (CRCs) are important for maintaining integrity in data transmissions. CRC performance is mainly affected by the polynomial chosen. Recent increases in data throughput require a foray into determining optimal polynomials through software or hardware implementations. Most CRC implementations in use, offer less than optimal performance or are inferior to their newer published counterparts. Classical approaches to determining optimal polynomials involve brute force based searching a population set of all possible polynomials in that set. This paper evaluates performance of CRC-polynomials generated with Genetic Algorithms. It then compares the resultant polynomials, both with and without encryption headers against a benchmark polynomial.
The evaluation performance of letter-based technique on text steganography sy...journalBEEI
Steganography is a branch of information hiding that conceals a hidden message in a medium such as text, image, audio, or video. This paper concerns the implementation of steganography in the text domain, called text steganography, and concentrates on the letter-based technique as one representative technique in text steganography. It presents several letter-based techniques integrated into a single system, described through a logical and physical design. The integrated system is evaluated using parameters chosen to measure performance in terms of capacity after the embedding process and the time consumed in the development process. This paper is intended to contribute by describing the implementation of these techniques in one system and by reporting their performance under the chosen evaluation parameters.
Review and comparison of tasks scheduling in cloud computingijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent computing resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. A cloud is a virtual pool of resources provided to users via the Internet, giving them virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. One of the goals is to use these resources efficiently and gain maximum profit. Scheduling is a critical problem in cloud computing because a cloud provider has to serve many users, so it is a major issue in establishing cloud computing systems. Scheduling algorithms should order the jobs so as to balance improving performance and quality of service against maintaining efficiency and fairness among the jobs. This paper introduces and explores some of the scheduling methods proposed for cloud computing. Finally, the waiting time and execution time of some of the proposed algorithms are evaluated.
Performance Comparison of Automatic Speaker Recognition using Vector Quantiza...CSCJournals
In this paper, three approaches for automatic Speaker Recognition based on Vector quantization are proposed and their performances are compared. Vector Quantization (VQ) is used for feature extraction in both the training and testing phases. Three methods for codebook generation have been used. In the 1st method, codebooks are generated from the speech samples by using the Linde-Buzo-Gray (LBG) algorithm. In the 2nd method, the codebooks are generated using the Kekre’s Fast Codebook Generation (KFCG) algorithm and in the 3rd method, the codebooks are generated using the Kekre’s Median Codebook Generation (KMCG) algorithm. For speaker identification, the codebook of the test sample is similarly generated and compared with the codebooks of the reference samples stored in the database. The results obtained for the three methods have been compared. The results show that KFCG gives better results than LBG, while KMCG gives the best results.
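The LBG codebook generation used in the first method above can be sketched as follows. This is the textbook split-and-refine procedure on generic feature vectors, not the paper's implementation, and the KFCG/KMCG variants are not shown.

```python
import numpy as np

def lbg_codebook(vectors, size, eps=0.01, iters=20):
    """Linde-Buzo-Gray: grow a codebook by splitting each codeword into a
    perturbed pair, then refine with k-means-style passes."""
    codebook = vectors.mean(axis=0, keepdims=True)   # start from the centroid
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps),  # split step
                              codebook * (1 - eps)])
        for _ in range(iters):
            # nearest-codeword assignment (squared Euclidean distortion)
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            idx = d.argmin(axis=1)
            for k in range(len(codebook)):           # centroid update
                members = vectors[idx == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```

In a speaker recognition setting, `vectors` would hold per-frame feature vectors, and a test utterance is matched to the reference speaker whose codebook gives the lowest average distortion.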
Investigating the Performance of NoC Using Hierarchical Routing ApproachIJERA Editor
The network-on-chip (NoC) model has emerged as a revolutionary methodology for incorporating many intellectual property (IP) blocks in a single die. According to the International Technology Roadmap for Semiconductors (ITRS), device sizes must continue to scale down; at smaller device sizes, long interconnects should be avoided, which calls for new interconnect patterns. Three-dimensional ICs can achieve superior performance, better resistance against noise, and lower interconnect power consumption compared to traditional planar ICs. In this paper, network data is routed using a hierarchical methodology. We analyze the total number of logic gates and registers, the power consumption, and the delay when different widths of data are transmitted, using Quartus II software.
Faster Training Algorithms in Neural Network Based Approach For Handwritten T...CSCJournals
Handwritten text and character recognition is a challenging task compared to recognition of handwritten numerals and machine-printed text, due to its large natural variety. Practical pattern recognition problems use bulk data, and in principle there is a one-step deterministic solution to the training problem: compute the inverse of the Hessian matrix and multiply it by the first-order local gradient vector. In practice, however, when the neural network is large, inverting the Hessian is not manageable, and a further condition must hold: the Hessian must be positive definite, which may not be satisfied. In these cases iterative methods are used instead. Research over the past decade has shown that neural-network-based approaches provide the most reliable performance in handwritten character and text recognition, but recognition performance depends on factors such as the number of training samples, the reliability and number of features per character, the training time, and the variety of handwriting. Important features from different types of handwriting are collected and fed to the neural network for training. More features increase test accuracy, but they also lengthen the time needed for the error curve to converge. To reduce training time, a proper training algorithm should be chosen so that the system achieves the best training and test accuracy in the least possible time, that is, reaches its best intelligence fastest. We have used several second-order conjugate gradient algorithms for training and found the scaled conjugate gradient (SCG) algorithm, a second-order training algorithm, to be the fastest for our application, taking minimum time with excellent test accuracy. A scanned handwritten text is taken as input and character-level segmentation is performed.
Important and reliable features are extracted from each character and used as input to a neural network for training. When the error reaches a satisfactory level (10^-12), the weights are accepted for testing on a test script. Finally, a lexicon-matching algorithm resolves minor misclassifications.
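The contrast drawn above, between the one-step Hessian-inverse solution and iterative conjugate-gradient training, can be shown on a tiny quadratic objective. This is a minimal numerical sketch, not the paper's network: for a quadratic with a positive-definite Hessian, one Newton step lands on the minimum, while linear conjugate gradient reaches it in at most n iterations without forming the Hessian inverse.

```python
import numpy as np

# Quadratic objective L(w) = 0.5 w^T A w - b^T w with SPD Hessian A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

# One-step "deterministic" solution: w* = w0 - H^{-1} g
w0 = np.zeros(2)
g = A @ w0 - b                       # first-order local gradient
w_newton = w0 - np.linalg.solve(A, g)

# Iterative alternative avoiding the Hessian inverse: linear conjugate gradient
w, r = np.zeros(2), b.copy()
p = r.copy()
for _ in range(2):                   # converges in <= dim steps for SPD systems
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    w = w + alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

assert np.allclose(w_newton, w)      # both reach the unique minimizer
```

SCG extends this idea to non-quadratic network losses by scaling the curvature estimate so the effective Hessian stays positive definite.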
Survey on Error Control Coding TechniquesIJTET Journal
Abstract - Error control coding techniques are used to ensure that received information is correct and has not been corrupted by environmental defects and noise occurring during transmission or during data read operations from memory. Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission, and modern coding techniques detect and correct the corrupted bytes. Error control coding is divided into automatic repeat request (ARQ) and forward error correction (FEC). In ARQ, when the receiver detects an error in the received data, it requests the sender to retransmit it. FEC adds redundant data to a message so that it can be recovered by the receiver even when a number of errors are introduced during transmission or storage. Detection and correction of burst errors can be obtained with Reed-Solomon codes, while low-density parity-check codes furnish outstanding performance remarkably close to the Shannon limit.
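The FEC principle surveyed above, adding redundancy so the receiver can recover without retransmission, can be illustrated with the simplest possible code: a rate-1/3 repetition code with majority-vote decoding. This toy is not one of the surveyed codes; it merely shows redundancy absorbing a channel error.

```python
def encode_rep3(bits):
    # FEC redundancy: transmit every bit three times (rate 1/3)
    return [b for b in bits for _ in range(3)]

def decode_rep3(coded):
    # majority vote per 3-bit codeword corrects any single flipped bit
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
tx = encode_rep3(msg)
tx[4] ^= 1                     # channel flips one bit in the second codeword
assert decode_rep3(tx) == msg  # receiver recovers the message, no ARQ needed
```

Under ARQ, the same flipped bit would instead trigger a detection-and-retransmit round trip, which is the latency/redundancy tradeoff the survey describes.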
This paper introduces a Simulink model design for a modified fountain code. The code is a new version of the traditional Luby transform (LT) codes.
The design constructs the blocks required to generate the generator matrix of limited-degree-hopping-segment Luby transform (LDHS-LT) codes. The code is especially designed for short data files, which are of great interest for wireless sensor networks. It generates the degrees in a predetermined sequence rather than randomly and partitions the data file into segments. Data packets are selected serially according to the integers produced by the degree and segment generators. The code is tested using a Monte Carlo simulation approach against conventional code generation using the robust soliton distribution (RSD) for degree generation, and the simulation results confirm better performance on all tested parameters.
CODING SCHEMES FOR ENERGY CONSTRAINED IOT DEVICESijmnct
This paper investigates the application of advanced forward error correction techniques, mainly low-density parity-check (LDPC) codes and polar codes, to IoT networks. These codes are under consideration for 5G systems. Different code parameters such as the code rate and the number of decoding iterations are varied to show their effect on network performance. LDPC performed better than the polar code over the IoT network scenario considered in this work, for the same coding rate and number of decoding iterations. In terms of bit error rate (BER), LDPC with rate 1/3 provided an improvement of up to 2.6 dB for the additive white Gaussian noise (AWGN) channel and 2 dB for SUI-3 (a frequency-selective fading channel model). The LDPC code gives a throughput improvement of about 12% compared to the polar code at a coding rate of 2/3 over the AWGN channel; the corresponding value over the SUI-3 channel is about 10%. Finally, in comparison with LDPC, the polar code shows better energy saving for large numbers of decoding iterations and high coding rates.
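The iterative LDPC decoding whose iteration count is varied above can be sketched with Gallager's hard-decision bit-flipping algorithm on a tiny parity-check matrix. This toy uses the Hamming(7,4) check matrix as a stand-in for a real sparse LDPC matrix; on such a small dense matrix it only reliably corrects errors in bits covered by two or more checks, so it illustrates the iteration loop rather than 5G-grade performance.

```python
import numpy as np

# Toy parity-check matrix (Hamming(7,4) standing in for a sparse LDPC code)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iters=10):
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():          # all parity checks satisfied
            break
        votes = syndrome @ H            # unsatisfied-check count per bit
        r[np.argmax(votes)] ^= 1        # flip the most-suspect bit
    return r

c = np.zeros(7, dtype=int)              # the all-zero codeword is always valid
rx = c.copy(); rx[2] = 1                # single channel error
assert (bit_flip_decode(rx, H) == c).all()
```

The `max_iters` parameter plays the role of the decoding iteration count whose energy cost the paper trades against BER.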
Energy-efficient Multipath Routing for Wireless Sensor Networks: A Genet...dbpublications
Abstract - Two factors must be considered when deploying any wireless sensor network: energy efficiency and fault tolerance. Multipath routing is an efficient solution for fault tolerance in WSNs. The genetic algorithm used here is a meta-heuristic search technique. The base station (BS) prepares the routing schedule in its routing table, and all nodes share it across the entire network. In the proposed algorithm, several parameters are used to build an efficient fitness function, such as the distance between sender and receiver nodes, the distance from the next-hop node to the BS, and the number of hops needed to send data from the next-hop node to the BS. The proposed algorithm is simulated and evaluated with various performance metrics.
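A fitness function combining the three parameters named above can be sketched as a weighted cost. The weights, node positions, and the lower-is-better convention are all hypothetical choices for illustration, not the paper's actual function.

```python
import math

def route_fitness(path, pos, bs, w=(0.5, 0.3, 0.2)):
    # Illustrative cost (lower = better): weighted sum of total link distance,
    # final-hop-to-BS distance, and hop count. Weights are hypothetical.
    hop_dist = sum(math.dist(pos[a], pos[b]) for a, b in zip(path, path[1:]))
    bs_dist = math.dist(pos[path[-1]], pos[bs])
    return w[0] * hop_dist + w[1] * bs_dist + w[2] * (len(path) - 1)

pos = {"s": (0, 0), "a": (1, 0), "b": (0, 5), "bs": (2, 0)}
direct = route_fitness(["s", "a"], pos, "bs")
detour = route_fitness(["s", "b", "a"], pos, "bs")
assert direct < detour          # the shorter candidate route scores better
```

In a GA, such a cost would be evaluated for each chromosome (candidate path), with selection favoring low-cost routes.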
Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit. The motivation for using turbo codes is that they are an appealing mix of a random appearance on the channel and a physically realizable decoding structure. Communication systems face the problems of latency, fast switching, and reliable data transfer. The objective of this paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver (permuter). The data received from the channel is decoded iteratively by the two constituent decoders: in each cycle, soft (probabilistic) information about each bit of the decoded sequence is passed from one elementary decoder to the other and updated.
The performance of the chip is also verified using the maximum a posteriori
(MAP) method in the decoder chip. The performance of field-programmable
gate array (FPGA) hardware is evaluated using hardware and timing
parameters extracted from Xilinx ISE 14.7. Parallel concatenation offers a better overall rate for the same component-code performance, along with reduced delay, low hardware complexity, and higher frequency support.
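The parallel-concatenation structure described above can be sketched in a few lines. The constituent recursive systematic convolutional (RSC) generators (feedback 1+D+D^2, feedforward 1+D^2) and the fixed interleaver are illustrative textbook choices, not the chip's actual parameters.

```python
def rsc_parity(bits):
    # RSC encoder with feedback g1 = 1+D+D^2 and feedforward g2 = 1+D^2;
    # returns only the parity stream (the systematic stream is the input).
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback (recursive) bit
        parity.append(a ^ s2)    # feedforward taps 1 and D^2
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    # Parallel concatenation: systematic bits plus parity from each constituent
    # encoder, the second fed through an interleaver -> overall rate 1/3.
    interleaved = [bits[i] for i in perm]
    return bits, rsc_parity(bits), rsc_parity(interleaved)

u = [1, 0, 1, 1, 0, 0]
perm = [3, 0, 4, 1, 5, 2]        # illustrative fixed interleaver
sys_bits, p1, p2 = turbo_encode(u, perm)
assert sys_bits == u and len(p1) == len(p2) == len(u)   # three streams: rate 1/3
```

The iterative decoder then runs one soft-input soft-output decoder per parity stream, exchanging extrinsic information through the same permutation.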
An efficient reconfigurable code rate cooperative low-density parity check co...IJECEIAES
Digital communication is now performed extensively. During this process, authentication must be maintained while overheads such as signal attenuation and code-rate fluctuations are minimized and optimized, which can be achieved by adopting parallel encoder and decoder operations. To overcome the above-mentioned drawbacks, a reconfigurable code rate cooperative (RCRC) low-density parity-check (LDPC) method is proposed. The proposed RCRC-LDPC code can operate on data at gigabits per second and effectively performs linear encoding, uses a dual-diagonal form, widens the range of code rates, and achieves an optimal degree distribution for the LDPC mother code. The proposed method optimizes the transmission rate and can operate at a code rate of 0.98, the highest upper-bounded code rate compared with existing methods. The implementation has been carried out using MATLAB, and per the simulation results, the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second at a clock frequency of 160 MHz.
A survey on power- efficient Forward Error Correction scheme for Wireless Sen...dbpublications
In recent years, wireless sensor networks have been widely used due to their attractive features, and reliability is very important for a broad range of such networks. Since a wireless sensor network consists of a large number of small sensors with limited battery capacity, power consumption is a key issue: the network lifetime depends on how power resources are used. In this paper, we survey forward error correction (FEC), an error control method that adds redundancy to the transmitted data so that the receiver can recover error-free data. The results show that FEC provides high reliability in wireless sensor networks.
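A classic small example of the redundancy-for-reliability tradeoff surveyed above is the Hamming(7,4) code, which corrects any single bit error per 7-bit codeword at the power cost of transmitting 3 extra bits per 4 data bits. This sketch is illustrative and is not tied to any specific scheme from the survey.

```python
import numpy as np

# Systematic Hamming(7,4): 4 data bits + 3 parity bits
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):                 # 4 data bits -> 7-bit codeword
    return msg @ G % 2

def correct(rx):                 # single-error correction via the syndrome
    s = H @ rx % 2
    if s.any():
        # the syndrome equals the H-column of the flipped position
        err = int(np.argmax((H.T == s).all(axis=1)))
        rx = rx.copy(); rx[err] ^= 1
    return rx

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw_err = cw.copy(); cw_err[5] ^= 1       # one channel error
assert (correct(cw_err)[:4] == msg).all()
```

For a battery-limited sensor, the receiver fixing this error locally is what removes the retransmission energy cost.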
A New Bit Split and Interleaved Channel Coding for MIMO DecoderIJARBEST JOURNAL
Authors: C. Amar Singh Feroz, S. Karthikeyan, K. Mala
Abstract– In wireless communications, the use of multiple antennas at both the
transmitter and receiver is a key technology to enable high data transmission without
additional bandwidth or transmit power. MIMO schemes are widely used in many
wireless standards, allowing higher throughput using spatial multiplexing techniques.
Bit-split mapping based on JDD is designed. Here, ETI coding is used for encoding and Viterbi decoding is used for decoding. Experimental results for 16-QAM and 64-QAM with code rates of 1/2 and 1/3 are shown to verify the proposed approach and to elucidate the design tradeoffs in terms of BER performance. This bit-split mapping based JDD algorithm can greatly improve BER performance under different system settings.
New Structure of Channel Coding: Serial Concatenation of Polar Codesijwmn
In this paper, we introduce a new coding and decoding structure for enhancing the reliability and
performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar
codes in series to create robust error-correcting codes. The primary objective here is to optimize the
behavior of individual elementary codes within polar codes. In this structure, we incorporate interleaving,
a technique that rearranges bits to maximize the separation between originally neighboring symbols. This
rearrangement is instrumental in converting error clusters into distributed errors across the entire
sequence. To evaluate the performance, we model a communication system with seven components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of the codes for different block lengths and code rates, and we then compare the BER performance of our proposed method against standalone polar codes.
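The interleaving step described above, spreading originally neighboring symbols so a burst becomes isolated errors, can be shown with a simple row-write/column-read block interleaver. The 3x4 dimensions and the 3-symbol burst are illustrative only.

```python
def interleave(bits, rows, cols):
    # write row-by-row, read column-by-column
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # inverse permutation: element for original index r*cols+c sits at c*rows+r
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

rows, cols = 3, 4
data = list(range(rows * cols))            # positions 0..11 as markers
tx = interleave(data, rows, cols)
tx[0:3] = [-1, -1, -1]                     # a burst of 3 corrupted symbols
rx = deinterleave(tx, rows, cols)
burst_positions = [i for i, b in enumerate(rx) if b == -1]
assert burst_positions == [0, 4, 8]        # burst dispersed across the block
```

After deinterleaving, the corrupted symbols are `cols` positions apart, which is what lets each constituent polar decoder see them as independent single errors.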
A Hybrid Approach for Ensuring Security in Data Communication cscpconf
For a very long time, various forms of steganographic and cryptographic techniques have been used to ensure security in data communication. Whereas steganography is the art of hiding the fact that any communication is taking place, cryptography ensures data security by changing the very form of the data being communicated, using a symmetric or an asymmetric key. Both methods, however, can be weakened by an adversary: in steganography there is always a possibility that the opponent detects the presence of a message, and most cryptographic techniques are vulnerable to disclosure of the key. This paper proposes a hybrid approach in which image steganography and cryptography are combined to protect sensitive data, thereby ensuring improved security in data communication. To measure its impact, a simulator was designed in MATLAB and the corresponding time complexities were recorded. The simulation results show that, although this hybrid technique increases the time complexity, it ensures enhanced security in data communication.
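The encrypt-then-embed pipeline described above can be sketched minimally. This is an assumption-laden toy: a repeating-key XOR stream stands in for the paper's (unspecified) cipher, and a plain byte array stands in for image pixel data; a real system would use a proper cipher such as AES.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # toy repeating-key XOR; stand-in for a real symmetric cipher
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    # hide payload bits in the least significant bit of each cover byte
    bits = [(byte >> (7 - k)) & 1 for byte in payload for k in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract_lsb(stego: bytearray, nbytes: int) -> bytes:
    bits = [stego[i] & 1 for i in range(nbytes * 8)]
    return bytes(sum(bit << (7 - k) for k, bit in enumerate(bits[i:i + 8]))
                 for i in range(0, len(bits), 8))

secret, key = b"hi", b"k3y"
cover = bytearray(range(128))              # stand-in for pixel bytes
stego = embed_lsb(cover, xor_cipher(secret, key))
assert xor_cipher(extract_lsb(stego, len(secret)), key) == secret
```

The hybrid property is visible here: an observer who detects the LSB channel still recovers only ciphertext, and a key thief still has to locate the hidden channel.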
Design and implementation of log domain decoder IJECEIAES
Low-density parity-check (LDPC) codes have become famous in communication systems for error correction, owing to their robust error-correcting performance and ability to meet the requirements of 5G systems. However, the main challenge facing researchers is the hardware implementation, because of its high complexity and long run time. In this paper, an efficient and optimized design for a log-domain decoder has been implemented using Xilinx System Generator with the FPGA device Kintex-7 (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a bit error rate (BER) very close to theoretical calculations, which illustrates that this decoder is suitable for next-generation demands requiring high data rates with very low BER.
implementation of area efficient high speed eddr architectureKumar Goud
Abstract - This project presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into motion estimation (ME) for video coding testing applications. An error in the processing elements (PEs), the key components of a ME, can be detected and recovered effectively by using the EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with acceptable area overhead and timing penalty. Functional verification and synthesis are performed with Xilinx ISE; compared to the existing design, the implemented design reduces both area and timing.
Index Terms—Area overhead, data recovery, error detection, reliability, residue-and-quotient (RQ) code, Xilinx ISE
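The residue-and-quotient idea named above can be shown in a simplified scalar form: a value is accompanied by its quotient and residue with respect to a modulus, and a mismatch on re-derivation flags a processing error. The modulus 64 is an illustrative choice, not the design's parameter.

```python
def rq_encode(x, m=64):
    # represent x by its quotient and residue w.r.t. modulus m
    return x // m, x % m

def rq_check(x, q, r, m=64):
    # re-derive and compare: a mismatch flags a processing error
    return (x // m, x % m) == (q, r)

x = 1234
q, r = rq_encode(x)
assert q * 64 + r == x             # lossless reconstruction for data recovery
assert rq_check(x, q, r)
assert not rq_check(x ^ 1, q, r)   # a corrupted value is detected
```

In the EDDR setting, the (q, r) pair computed alongside each PE output is what allows both detecting the faulty result and reconstructing the correct one.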
Belief Propagation Decoder for LDPC Codes Based on VLSI Implementationinventionjournals
A 2 stage data word packet communication decoder using rate 1-by-3 viterbi de...eSAT Journals
Abstract
In the field of consumer electronics, high-speed communication applications based on hardware and software control play a vital role in establishing benchmarks for the operational requirements of electronic hardware in wired and wireless communication. In the modern era of communication electronics, decoding and encoding data using the high-speed and low-power features of VLSI-based FPGA devices [1] offers low area, hardware portability, data security, high-speed network connectivity [2], data error removal capability, and complex algorithm realization. The Viterbi decoder is a high-rate decoder that is commonly and effectively used in modern communication hardware; it involves a trellis-coded modulation (TCM) scheme for decoding the data and is an attempt to reduce power, increase speed [1], and lower cost compared to conventional decoders for wired and wireless communication. This paper proposes a Viterbi decoder design for communication systems with improved data error identification probability and low-power operation. The proposed design combines the error identification capability of the Viterbi decoder with a parity decoder to improve the overall system's probability of identifying errors during communication. Among the various functional blocks in the Viterbi decoder, both hardware complexity and decoding speed depend strongly on the decoder architecture. Here, the operational blocks of the Viterbi decoder are combined with a parity testing block that identifies errors in the Viterbi-decoded data using a parity bit, in a multi-stage pipelined architecture: the first stage is Viterbi decoding and the second stage is parity decoding for identifying errors in the communicated data.
Any odd number of errors occurring in the data recovered by the first decoding stage can be identified by the second stage. A general solution for communication using a conventional Viterbi decoder is also given in this paper. Implementation results of the proposed design for a rate 1/3 convolutional code are compared with the conventional design. The proposed design is simulated and synthesized successfully with the Xilinx ISE tool [3] on a Xilinx Spartan-3E FPGA.
Keywords: ACS (Add-Compare-Select), Convolutional Code Rate, Error probability, FPGA, Low Power, Parity Encoder, Pipelining, Trace Back, Viterbi Decoder, Xilinx ISE.
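The first pipeline stage above, Viterbi decoding with add-compare-select survivor tracking, can be sketched in software. This is a hedged minimal model using the textbook rate-1/2, K=3 convolutional code with generators (7, 5) octal and hard-decision Hamming metrics, not the paper's rate-1/3 hardware design.

```python
G = (0b111, 0b101)          # generator taps (7, 5) octal, constraint length 3

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx):
    survivors = {0: (0, [])}             # state -> (path metric, decoded bits)
    for t in range(len(rx) // 2):
        sym, new = rx[2 * t:2 * t + 2], {}
        for state, (metric, path) in survivors.items():
            for b in (0, 1):             # add-compare-select over both inputs
                reg = (b << 2) | state
                branch = [bin(reg & g).count("1") & 1 for g in G]
                cost = metric + sum(x != y for x, y in zip(sym, branch))
                nxt = reg >> 1
                if nxt not in new or cost < new[nxt][0]:
                    new[nxt] = (cost, path + [b])
        survivors = new
    return min(survivors.values())[1]    # trace back the best-metric path

msg = [1, 0, 1, 1, 0, 0]                 # trailing zeros flush the encoder
rx = conv_encode(msg)
rx[3] ^= 1                               # single channel bit error
assert viterbi_decode(rx) == msg
```

A parity stage like the paper's would then check a parity bit over the decoded output, catching any residual odd-weight error the trellis stage missed.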
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this work aims to highlight the significance of gender equality and to encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
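The droop control method applied above follows the conventional P-f and Q-V droop laws, f = f0 - kp(P - P0) and V = V0 - kq(Q - Q0). The sketch below illustrates only these standard equations; the gains, setpoints, and operating point are hypothetical and not taken from the paper.

```python
def droop(p, q, f0=50.0, v0=1.0, kp=0.0005, kq=0.05, p0=0.0, q0=0.0):
    """Conventional droop laws: f = f0 - kp*(P - P0), V = v0 - kq*(Q - Q0).
    Gains and setpoints are illustrative, not from the paper."""
    return f0 - kp * (p - p0), v0 - kq * (q - q0)

# extra active load pulls frequency down along the droop slope;
# extra reactive load pulls the voltage magnitude down
f, v = droop(p=100.0, q=0.2)
assert abs(f - 49.95) < 1e-9
assert abs(v - 0.99) < 1e-9
```

In an islanded microgrid, each DG runs such a law locally, so load changes are shared among units without communication.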
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
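The NARX structure described above can be sketched as a regression of the current output on lagged outputs and inputs plus a nonlinear term. This is a minimal illustration on synthetic data whose dynamics the regressor happens to match, fit by ordinary least squares; it is not the paper's battery model or identification procedure.

```python
import numpy as np

def narx_features(y, u, k):
    # regressors: lagged output, lagged (exogenous) input, and a quadratic term
    return np.array([y[k - 1], u[k - 1], y[k - 1] ** 2, 1.0])

# synthetic nonlinear system generating the "measured" training data
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)                  # exogenous input (e.g., current)
y = np.zeros(200)                            # output (e.g., terminal voltage)
for k in range(1, 200):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1] - 0.1 * y[k - 1] ** 2

# one-shot least-squares fit of the NARX parameters
X = np.array([narx_features(y, u, k) for k in range(1, 200)])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

assert np.allclose(theta[:3], [0.8, 0.3, -0.1], atol=1e-6)  # dynamics recovered
```

With noisy battery measurements the recovery would only be approximate, and richer regressors (more lags, other nonlinearities) would be selected by validation error.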
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees which both are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey and are known for having a unique flavor profile. Problem identified that many stingless bees collapsed due to weather, temperature and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review on stingless bee's honey production and prediction modeling. About 54 previous research has been analyzed and compared in identifying the research gaps. A framework on modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis on the internet of things (IoT) monitoring systems, honey production estimation, convolution neural networks (CNNs), and automatic identification methods on bee species. It is identified based on image detection method the top best three efficiency presents CNN is at 98.67%, densely connected convolutional networks with YOLO v3 is 97.7%, and DenseNet201 convolutional networks 99.81%. This study is significant to assist the researcher in developing a model for predicting stingless honey produced by bee's output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
2. Int J Elec & Comp Eng ISSN: 2088-8708
Improving the data recovery for short length LT codes (Nadhir Ibrahim Abdulkhaleq)
1973
the recovered information packets are released. This procedure is repeated until all the information
packets are recovered.
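The release-and-repeat procedure above is the standard peeling (belief-propagation) decoder. A minimal Python sketch of that idea, with data packets modeled as integers and each coded symbol as a (neighbor set, XOR value) pair — both illustrative modeling choices, not the paper's implementation:

```python
def lt_peel_decode(coded, k):
    """Peeling (belief-propagation) decoder for an LT code.

    coded: list of (neighbor_indices, value) pairs, where value is the
           XOR of the data packets indexed by neighbor_indices.
    k:     number of information packets.
    Returns the list of recovered packets (None where unrecovered).
    """
    data = [None] * k
    coded = [(set(nbrs), val) for nbrs, val in coded]
    progress = True
    while progress:
        progress = False
        # Release every degree-one coded symbol: its value IS the packet.
        for nbrs, val in coded:
            if len(nbrs) == 1:
                idx = next(iter(nbrs))
                if data[idx] is None:
                    data[idx] = val
                    progress = True
        # XOR every recovered packet out of the remaining coded symbols,
        # reducing their degrees; fully reduced symbols are discarded.
        new_coded = []
        for nbrs, val in coded:
            for idx in list(nbrs):
                if data[idx] is not None:
                    nbrs.discard(idx)
                    val ^= data[idx]
            if nbrs:
                new_coded.append((nbrs, val))
        coded = new_coded
    return data
```

Decoding stalls (and data stays unrecovered) as soon as no degree-one symbol remains, which is exactly why the degree distribution matters.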
It is obvious from the decoding process that degree generation is the core of successful
data recovery. Many attempts have been made to modify the conventional scheme of degree
generation. The first degree distributions used were the ideal soliton distribution (ISD) and the robust
soliton distribution (RSD) [1]. The features of the RSD make it suitable for bulk data files [1, 2, 4]. Hyytiä
et al. [5] used a sampling approach to determine an optimized degree distribution. They found that for large
information sequences, i.e., thousands of packets, the RSD shows good performance. In [6], the authors
investigated optimization criteria of degree distributions for small file sizes. They concluded that,
in a well-designed distribution, only a few parameters need to be controlled to ensure almost maximal
performance. Chen and Zhou [7] suggested an LT code with a revised RSD for degree generation
based on a Kent chaotic map and a pseudo-random number generator. Their idea is based on joining both
probability functions, but with no attention to the parameter. This new degree
generation yields more small degree values. Chen et al. [8] used the covariance matrix
adaptation evolution strategy (CMA-ES) to enhance the degree values for better performance.
Their approach generates the degrees adaptively without changing the main
parameters of the distribution. In [9], the authors present another attempt to improve degree generation for a better
encoding-decoding framework. They force the distribution to have an expected number of degree-one coded
packets equal to a fraction of the data length. A state-of-the-art work on fountain codes was done in [10], where
a memory-based rateless encoding method is proposed. In this new data allocation method, the degree-one
coded symbol is isolated and connected to the data packets with the highest connection. This
literature review makes clear the sustained interest in modifying the RSD toward an optimal degree distribution.
As a new attempt, this paper introduces a deterministic manner for both the generation of the degrees and the choice
of the data packets which form the coded packets.
Figure 1. Encoding and decoding operations for LT code, panels (a)-(h)
In the next section a review of degree generation methods is presented. Section 3 is dedicated to our
deterministic degree generation. The ability of this new degree generation to enhance the code
performance is tested in section 4. The last section concludes the whole idea in a few condensed
lines.
Int J Elec & Comp Eng, ISSN: 2088-8708, Vol. 10, No. 2, April 2020: 1972-1979
2. DEGREE GENERATION APPROACHES
LT code design mainly depends on the quality of the degree generation method. The design has two
features: the degree distribution and the way of data selection. All conventional approaches to LT code
design use random degree generation, and the data selection is also applied randomly. After the
introduction of the ideal soliton distribution and the robust soliton distribution by Luby in [1],
many other studies [3, 5-9] on the RSD were performed. All these studies focused on improving the RSD,
i.e., on improving the right-hand side distribution. In contrast, in [10] Khaled et al. introduced a
memory-based rateless encoding operation yielding a non-uniform left-hand side distribution, which shows
better performance in terms of BER. In this paper, we propose a new approach for the rateless encoding
operation, based on deterministic construction of the generator matrix with non-uniform left-hand and
right-hand side distributions. Below we discuss the degree generation methods that are used as a
comparison baseline for the proposed one.
2.1. Ideal soliton distribution
This distribution is ideal in theory but not practical, because at a certain point in
the decoding process a degree-one coded symbol could be lost with high probability. It is defined as:

ρ(d) = 1/k for d = 1, and ρ(d) = 1/(d(d−1)) for d = 2, 3, ..., k   (1)

where d is an integer representing the degree and k is the length of the information message.
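As an illustration (not part of the paper), the ideal soliton distribution in (1) can be built and sampled in a few lines of Python; the function names `ideal_soliton` and `sample_degree` are our own:

```python
import random

def ideal_soliton(k):
    """Return the ideal soliton probabilities rho(1..k) from (1) as a list."""
    rho = [0.0] * (k + 1)              # index 0 unused
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def sample_degree(dist, rng=random):
    """Draw a degree d with probability dist[d] via inverse-CDF sampling."""
    u = rng.random()
    acc = 0.0
    for d in range(1, len(dist)):
        acc += dist[d]
        if u <= acc:
            return d
    return len(dist) - 1               # guard against rounding at the tail
```

Since 1/k + Σ 1/(d(d−1)) telescopes to 1, the probabilities sum to one, which is a quick sanity check on the construction.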
2.2. Robust soliton distribution
In this modified degree distribution, the average number of degree-one coded packets is
increased [1]. The robust soliton distribution uses the ρ(d) in (1) to construct its new distribution as:

μ(d) = (ρ(d) + τ(d)) / β   (2)

where the normalization factor β = Σ_{d=1}^{k} (ρ(d) + τ(d)) and τ(d) is given as:

τ(d) = R/(dk) for d = 1, ..., (k/R) − 1;  τ(d) = R ln(R/δ)/k for d = k/R;  τ(d) = 0 for d > k/R   (3)

where R = c ln(k/δ) √k is the average number of coded packets with degree one. The constant c has
a value between 0 and 1, while δ represents the probability of decoder failure.
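For illustration only, the construction in (2)-(3) can be coded directly; the parameter values c = 0.1 and δ = 0.5 below are placeholders of our own, not the ones used in the simulations:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Return the normalized RSD probabilities mu(1..k) built from (1)-(3)."""
    R = c * math.log(k / delta) * math.sqrt(k)   # expected degree-one packets
    pivot = int(round(k / R))                    # the k/R switch point in (3)
    rho = [0.0] * (k + 1)
    tau = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    for d in range(1, k + 1):
        if d < pivot:
            tau[d] = R / (d * k)
        elif d == pivot:
            tau[d] = R * math.log(R / delta) / k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))   # normalizer in (2)
    return [(rho[d] + tau[d]) / beta for d in range(0, k + 1)]
```

The spike τ(k/R) raised to the left of degree one is what keeps the expected ripple size positive throughout decoding.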
2.3. Non-uniform data selection (NUDS)
Approximately all previous research worked on improving the right-hand distribution.
However, [10] introduced a memory-based encoding approach which results in better performance with the
help of memory tracking of the data packet degrees. The algorithm proposed in [10] is outlined below.
Algorithm 1 (NUDS)
1: Repeat
2: Generate a degree d using the RSD.
3: if d = 1
4: Select the data symbol with the largest degree, i.e., select the data symbol with the most connections.
5: else
6: Select d data packets uniformly.
7: XOR the selected data symbols to form the coded symbol.
8: end if
9: Until (an acknowledgement signal is received from the destination).
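A minimal sketch of one NUDS encoding step, for illustration; the connection-count array `conn` and the function name are our assumptions, not notation from [10]:

```python
import random

def nuds_encode_symbol(data, conn, rsd, rng=random):
    """Form one coded symbol per Algorithm 1: a degree-one symbol is tied to
    the most-connected data packet, otherwise packets are picked uniformly."""
    k = len(data)
    # inverse-CDF draw of a degree from the precomputed RSD rsd[1..k]
    u, acc, d = rng.random(), 0.0, k
    for i in range(1, k + 1):
        acc += rsd[i]
        if u <= acc:
            d = i
            break
    if d == 1:
        # data symbol with the most connections so far (hypothetical tracker)
        picks = [max(range(k), key=lambda i: conn[i])]
    else:
        picks = rng.sample(range(k), d)          # uniform selection
    coded = 0
    for i in picks:
        coded ^= data[i]                         # XOR the selected symbols
        conn[i] += 1                             # update the memory tracking
    return coded, picks
```

The tracker `conn` is what makes the left-hand side distribution non-uniform: heavily connected packets attract the degree-one symbols.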
Improving the data recovery for short length LT codes (Nadhir Ibrahim Abdulkhaleq)
3. DETERMINISTIC DEGREE GENERATION (DDG)
RSD is considered an optimal degree distribution for large file sizes, but it may suffer from
certain drawbacks when dealing with short data files [11]. The reason is the amount of extra
coded packets that have to be collected to ensure successful decoding. For LT codes this extra overhead
has been approximated as O(√k ln²(k/δ)), and it grows even larger for small message
sizes [12]. Several attempts have been presented in the field of improving communication
systems using channel coding [13-18]. Motivated by the decoding treatment done by
pattern recognition (PR) [19-25], which resumes decoding when the decoder halts, and by the work introduced
in [10] for NUDS, we propose a deterministic encoding (DDG) approach for LT-like codes.
The proposed approach is designed especially for short frame lengths. In our deterministic degree
generation, for a frame of data packets, the degrees are generated following this formula:
{ (4)
where the repetition period value is chosen within a fixed range, m is an integer index, and the last
parameter is the size of the code length. For an information sequence consisting of k symbols, the coded symbols
are formed using:
(5)
It is clear from (5) how the generator matrix is formed; the extra needed coded packets are
generated using (4) with repetition of the same allocation of the data symbols illustrated in (5).
To give such an encoding scheme more immunity against bad channel conditions, and to prevent losing
the same data packet repeatedly, a Time-Hopping-Encoding scheme is used.
In this scheme, the encoding matrix alternates between two shapes: the original
deterministic matrix and a random shuffle of it, as shown in Figure 2:
Figure 2. Deterministic degree generation frame
where the hopping period can be adjusted as required by the transmitter. The proposed approach is
explained in Algorithm 2:
Algorithm 2 Deterministic Degree Generation (DDG)
1: Repeat
2: for each coded packet in the frame
3: if the deterministic condition of (4) holds
4: set the degree to the first case of (4)
5: else
6: set the degree to the second case of (4)
7: end
8: end
9: for each coded packet in the frame
10: if the degree equals 1
11: allocate the data packet as in (5)
12: else
13: allocate the data packets following the time-hopping pattern of the matrix and its shuffle
14: end
15: end
16: Until (an acknowledgement is obtained from the receiver)
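Since (4)-(5) are not reproduced here, the sketch below shows only the time-hopping idea described in the text: alternate between a fixed deterministic generator matrix and a row-shuffled copy of it. All names (`G`, `hop_period`, `encode_frame`) are our own assumptions, not the paper's notation:

```python
import random

def time_hopping_rows(G, hop_period, n_frames, rng=random):
    """Yield, frame by frame, the generator rows to use: the deterministic
    matrix G on even hops and a random shuffle of its rows on odd hops."""
    for frame in range(n_frames):
        if (frame // hop_period) % 2 == 0:
            rows = list(G)                 # original deterministic shape
        else:
            rows = list(G)
            rng.shuffle(rows)              # hopped (shuffled) shape
        yield rows

def encode_frame(rows, data):
    """XOR together the data packets selected by each generator row."""
    out = []
    for row in rows:
        c = 0
        for i, bit in enumerate(row):
            if bit:
                c ^= data[i]
        out.append(c)
    return out
```

Shuffling only permutes the rows, so the degree profile of the frame is preserved while the data allocation changes, which is what prevents the same data packet from being lost repeatedly.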
4. DECODING IN THE ABSENCE OF DEGREE-ONE CODE PACKET
LT-like codes suffer from the absence of degree-one coded packets during the decoding operation,
especially for short messages. If no degree-one coded packet is found at some stage of the decoding
operation, a decoding failure is declared. Our work in [19] succeeded in breaking the decoder failure:
even when there is no degree-one coded symbol, the decoding operation can be continued by searching for
certain configuration patterns inside the remaining coded symbols matrix. For this pattern recognition (PR)
approach, assume that there is no degree-one coded symbol at some stage of the decoding operation,
but there are two code-words c_i and c_j which are formed as:

c_i = x_1 ⊕ x_2,  c_j = x_1 ⊕ x_2 ⊕ x_3   (6)

where it is obvious that the degree of the code-words can be reduced to 1 using:

c_i ⊕ c_j = x_3   (7)

which results in the degree-one symbol x_3. Motivated by this approach, we added further searching steps,
trying to reduce the degree of coded symbols which are contained in each other; Algorithm 3 illustrates this
approach [19]:
Algorithm 3 Decoding Stuck Removal Approach
1: for iteration number = 1 : last   % last may be any number (2, 3, 4)
2: Repeat
3: select a code-word c_i from the remaining coded symbols matrix
4: XOR c_i with the candidate code-words
5: for any code-word c_j other than c_i
6: if deg(c_i ⊕ c_j) equals 1, then
7: release the degree-one symbol; flag = success
8: break; resume BP decoding
9: end if
10: end-for
11: select the next c_i, return to step 4
12: if flag ≠ success, repeat steps (4-10) to decrease the degrees, replacing the condition of
step 6 with deg(c_i ⊕ c_j) < deg(c_i)
13: end if
14: until (all degrees are tried)
15: end-for
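A minimal Python sketch of the containment search in Algorithm 3 (our illustration): code-words are represented as sets of data-packet indices, so set symmetric difference plays the role of the XOR in (7):

```python
def stuck_removal(codewords, target_degree=1):
    """Search pairs of code-words whose XOR (symmetric difference of their
    index sets) yields a symbol of at most target_degree; return it, else None."""
    cws = [frozenset(c) for c in codewords]
    for i, ci in enumerate(cws):
        for j, cj in enumerate(cws):
            if i == j:
                continue
            combined = ci ^ cj            # XOR of the two code-words, as in (7)
            if 0 < len(combined) <= target_degree:
                return combined           # e.g. a new degree-one symbol
    return None
```

Raising `target_degree` above 1 mimics step 12 of the algorithm, where any degree reduction (not only down to one) is accepted to keep the search moving.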
5. RESULTS AND ANALYSIS
The LT code performance using the BP algorithm is compared in an erasure channel environment using
three types of degree distributions: DDG, RSD and NUDS. For the RSD, fixed values of the parameters
c and δ are chosen. The message length is chosen as 32 packets (the packet length could be arbitrary;
in our simulation it is represented by 1 bit). A binary erasure channel with a fixed erasure probability
is employed. Computer simulations are performed such that each rate point on the graph
is checked until 100 erroneous frames are received. It is seen in Figure 3 that the proposed
method is better than LT-RSD and NUDS in terms of the number of unrecovered data packets
for all code rates.
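The evaluation loop described above can be sketched as follows (our illustration; `encode` and `bp_decode` stand in for any of the three encoders and the BP decoder, while the stopping rule of 100 erroneous frames is kept):

```python
import random

def count_unrecovered(encode, decode, k, n_coded, p_erasure,
                      max_err_frames=100, rng=random):
    """Simulate frames over a BEC until max_err_frames erroneous frames are
    seen; return the average number of unrecovered data packets per frame."""
    frames, unrecovered_total, err_frames = 0, 0, 0
    while err_frames < max_err_frames:
        data = [rng.randint(0, 1) for _ in range(k)]
        coded = encode(data, n_coded)
        # binary erasure channel: each coded packet is lost with prob p_erasure
        received = [c for c in coded if rng.random() > p_erasure]
        recovered = decode(received, k)          # None marks an unrecovered packet
        missing = sum(1 for i in range(k) if recovered[i] is None)
        unrecovered_total += missing
        err_frames += (missing > 0)
        frames += 1
    return unrecovered_total / frames
```

Sweeping `n_coded` (i.e., the code rate k/n) and plotting the returned average reproduces the shape of the comparison in Figure 3.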
The performance improvement for the LT code is obvious for all distribution types under study when
the proposed decoding stuck removal algorithm BP-PR is used, as shown in Figure 4. It is clear from Figure 4 that
the performance improvement for LT-RSD and NUDS is larger than that for DDG; however,
DDG still achieves the best performance. Another performance comparison is given in Figure 5, where
the success rate is defined as:
success rate = (number of fully recovered frames) / (total number of transmitted frames)   (8)
It is clear from Figure 5 that the DDG approach recovers all the information frames at a lower overhead,
while the other distributions have approximately the same performance and need to collect more
overhead to reach the achievement of our DDG.
Figure 3. Unrecovered packets results using BP decoding
Figure 4. Unrecovered packets results using BP-PR decoding
Figure 5. Successful decoding rates using BP-PR decoding
6. CONCLUSION
In this paper we proposed a time hopping deterministic degree generation encoding approach for
the forming of LT-like codes especially for small data files. It is shown via simulation results that
the proposed approach is better in terms of minimum number of unrecovered packets and overhead amount
required for full recovery of data packets than that of the classical LT-RSD and NUDS. The flexibility
features of the proposed approach support the design to overcome the problem of extra needed overhead for
the case of short length LT codes. For the decoding operation BP algorithm is employed supported by
the enhancement algorithm of pattern recognition (BP-PR). It has been approved that BP-PR has little
improvement to the proposed DDG-LT code compared to the clear improvement shown for the classical RSD
and NUDS-LT codes.
REFERENCES
[1] Luby M., "LT codes," In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002,
Proceedings, IEEE, pp. 271-280, 2002.
[2] Shokrollahi A., "Raptor codes," IEEE Transactions on Information Theory, vol. 52(6), pp. 2551-2567, 2006.
[3] Jafarizadeh S, & Jamalipour A., "An exact solution to degree distribution optimization in LT codes," In IEEE
Wireless Communications and Networking Conference (WCNC), IEEE, pp. 764-768, 2014.
[4] MacKay DJ, "Information theory, inference and learning algorithms," Cambridge University Press, 2003.
[5] Hyytiä E, Tirronen T, Virtamo J., "Optimizing the degree distribution of LT codes with an importance sampling
approach," In 6th International Workshop on Rare Event Simulation, pp. 56-66, 2006.
[6] Hyytia E, Tirronen T, Virtamo J., "Optimal degree distribution for LT codes with small message length," In 26th
IEEE International Conference on Computer Communications, IEEE, pp. 2576-2580, 2007.
[7] Chen Z. & Zhou Q., "Implementation of LT codes with a revised robust soliton distribution by using Kent chaotic
map," In 2010 International Workshop on Chaos-Fractal Theories and Applications, IEEE, pp. 149-153, 2010.
[8] Chen C. M., Chen Y. P., Shen T. C., Zao J. K., "On the optimization of degree distributions in LT code with
covariance matrix adaptation evolution strategy," In IEEE Congress on Evolutionary Computation, IEEE,
pp. 1-8, 2010.
[9] Yen K. K., Liao Y. C., Chen C. L, Chang H. C., "Adjusted robust soliton distribution (ARSD) with reshaped ripple
size for LT codes," In 2013 International Conference on Wireless Communications and Signal Processing, IEEE,
pp. 1-6, 2013.
[10] Hayajneh KF, Yousefi S, Valipour M., "Improved finite-length Luby-transform codes in the binary erasure
channel," IET Communications, vol. 9(8), pp. 1122-1130, 2015.
[11] Zao J. K, Hornansky M, Diao P. L., "Design of optimal short-length LT codes using evolution strategies," In 2012
IEEE Congress on Evolutionary Computation, IEEE, pp. 1-9, 2012.
[12] Wang X, Willig A, Woodward G., "Improving fountain codes for short message lengths by adding memory,"
In 2013 IEEE Eighth International Conference on Intelligent Sensors, IEEE, pp. 189-194, 2013.
[13] Sukar M. & Pal M., "SC-FDMA & OFDMA in LTE physical layer," International Journal of Engineering Trends
and Technology (IJETT), vol. 12(2), pp. 74-85, 2014.
[14] Sanghvi A, Mishra N, Waghmode R, Talele K., "Performance of reed-solomon codes in AWGN channel,"
International Journal of Electronics and Communication Engineering, vol. 4(3), pp. 259-266, 2011.
[15] Hussain G. A. & Audah L., "RS codes for downlink LTE system over LTE-MIMO channel," TELKOMNIKA
Indonesian Journal of Electrical Engineering, vol. 16(6), pp. 2563-2569, 2018.
[16] Adiono T, Ramdani A, Putra R., "Reversed-trellis tail-biting convolutional code (RT-TBCC) decoder architecture
design for LTE," International Journal of Electrical and Computer Engineering (IJECE), vol. 8(1),
pp. 198-209, 2018.
[17] Wang J., Li J., Cai C., "Two novel decoding algorithms for turbo codes based on taylor series in 3GPP LTE
system," TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12(5), pp. 3467-3475, 2014.
[18] Sirait F., Dani A., Pangaribowo T., "Capacity improvement and protection of LTE network on ethernet based
technique," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 16(1), pp. 200-209, 2018.
[19] Abdulkhaleq N. I., Gazi O., "Decoding of LT-like codes in the absence of degree-one code symbols," ETRI Journal,
vol. 38(5), pp. 896-902, 2016.
[20] Abdulkhaleq N. I., & Gazi O., "A sequential coding approach for short length LT codes over AWGN channel," in
4th International Conference on Electrical and Electronic Engineering (ICEEE), IEEE, pp. 267-271, 2017.
[21] Abdulkhaleq N. I., Hasan I. J., Salih N. A., "Mitigation of packet erasure using new fountain code design," In 2018
IEEE International Multidisciplinary Conference on Engineering Technology (IMCET), IEEE, pp. 1-7, 2018.
[22] Hasan I. J., Salih N. A., Abdulkhaleq N. I., "An Android smart application for an Arduino based local
meteorological data recording," IOP Conf. Ser.: Mater. Sci. Eng., vol. 518, 2019.
[23] Salih N. A., Hasan I. J., Abdulkhaleq N. I., "Design and implementation of a smart monitoring system for water
quality of fish farms," Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), Vol. 14,
No. 1, pp. 45-52, April 2019.
[24] Leftah H. A., Alminshid H. N., "Channel capacity and performance evaluation of precoded MIMO-OFDM system
with large-size constellation," Indonesian Journal of Electrical Engineering and Computer Science (IJEECS),
vol. 9(6), pp. 5024-5030, December 2019.
[25] Leftah H. A., Boussakta, S., "Efficient modulation scheme for OFDM system with ZP and MMSE equalizer," In
2013 IEEE International Conference on Communications (ICC), IEEE, pp. 4703-4707, 2013.