ABSTRACT: DNA (deoxyribonucleic acid) sequences range in size from millions to billions of nucleotides, and the volume of sequence data grows two to three times annually. Efficient lossless compression techniques, along with data structures to store, access, securely communicate, and search these large datasets, are therefore necessary. This compression algorithm for genetic sequences is based on searching for exact repeat, reverse, and palindrome (R2P) substrings, substituting them, and creating a library file. Each R2P substring is replaced by a corresponding ASCII character: for repeats, ASCII codes 33 to 33+72 are selected; for reverses, 33+73 to 33+73+72; and for palindromes, 179 to 179+72. In the selective encryption technique, the data are encrypted in the library file, in the compressed file, or in both, also using ASCII codes, with an online library file acting as a signature. Selective encryption, in which part of a message is encrypted while the remaining part is left unencrypted, is a viable proposition for running an encryption system on resource-constrained devices. The algorithm achieves a moderate compression rate, provides strong data security, runs in a few seconds, and has complexity O(n²). The compressed data can also be compressed again by a renowned general-purpose compressor to further reduce the compression rate and ratio. The technique achieves a compression rate of 2.004871 bits/base.
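The substring-to-ASCII mapping described in the abstract can be sketched as follows. This is a minimal illustration of the stated code ranges only; the library layout, the 0–72 index per category, and the helper names are assumptions, not the authors' implementation.

```python
# Sketch of the R2P substring-to-ASCII substitution (illustrative only).
# Assumption: each category's library holds at most 73 entries, indexed 0..72.

REPEAT_BASE = 33            # repeats use codes 33 .. 33+72
REVERSE_BASE = 33 + 73      # reverses use codes 106 .. 106+72
PALINDROME_BASE = 179       # palindromes use codes 179 .. 179+72

def encode_entry(kind: str, index: int) -> str:
    """Map a library entry (category + index 0..72) to its ASCII character."""
    if not 0 <= index <= 72:
        raise ValueError("each category holds at most 73 entries")
    base = {"repeat": REPEAT_BASE,
            "reverse": REVERSE_BASE,
            "palindrome": PALINDROME_BASE}[kind]
    return chr(base + index)

def decode_char(ch: str) -> tuple[str, int]:
    """Invert the mapping: recover (category, index) from one character."""
    code = ord(ch)
    if REPEAT_BASE <= code <= REPEAT_BASE + 72:
        return "repeat", code - REPEAT_BASE
    if REVERSE_BASE <= code <= REVERSE_BASE + 72:
        return "reverse", code - REVERSE_BASE
    if PALINDROME_BASE <= code <= PALINDROME_BASE + 72:
        return "palindrome", code - PALINDROME_BASE
    raise ValueError("character is not an R2P substitution code")
```

Because the three code ranges (33–105, 106–178, 179–251) do not overlap, a single character unambiguously identifies both the category and the library index during decompression.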
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio...RSIS International
In this paper, we have designed VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is truly realized when it is implemented in hardware, avoiding the extra processing time of the fetch-decode-execute cycle of traditional microprocessor-based computing systems. The new algorithm, with lower time complexity, combined with its application-specific hardware implementation, makes it suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware using VHDL.
An efficient transcoding algorithm for G.723.1 and G.729A ...Videoguy
This document proposes an efficient transcoding algorithm to translate between the G.723.1 and G.729A speech coding standards. The algorithm has four main steps: 1) converting line spectral pair parameters between the two codecs, 2) converting pitch intervals using a smoothing technique, 3) performing a fast adaptive codebook search, and 4) performing a fast fixed codebook search. Objective and subjective tests show the algorithm achieves comparable speech quality to tandem encoding/decoding with 26-38% lower complexity and shorter delay.
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
The document summarizes a proposed design for a radix-8 Booth encoded modulo 2^n−1 multiplier that can adapt its delay to match the critical path delay of a residue number system (RNS) multiplier. The design represents the hard multiple (3X) in a partially redundant form using a k-bit ripple carry adder to reduce the carry propagation length. It also represents the other partial products in a partially redundant, biased form. This allows the delay to be controlled by varying k. For a given n, valid values of k exist where the bias can be compensated by a single precomputed constant without affecting correctness.
Survey on Error Control Coding TechniquesIJTET Journal
This document discusses various error control coding techniques used to ensure correct data transmission over noisy channels. It describes automatic repeat request and forward error correction as the two main approaches. Specific coding schemes covered include parity codes, Hamming codes, BCH codes, Reed-Solomon codes, LDPC codes, convolutional codes, and turbo codes. Reed-Solomon codes can correct multiple burst errors with high code rates. LDPC codes provide performance close to the Shannon limit with lower complexity than turbo codes. The document provides an overview of the coding techniques and their encoding and decoding processes.
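As one concrete instance of the forward-error-correction schemes surveyed above, a Hamming(7,4) code encodes 4 data bits with 3 parity bits and corrects any single bit error. This is a standard textbook construction, not code from the surveyed paper.

```python
# Toy Hamming(7,4) encoder/decoder: 4 data bits, 3 parity bits,
# corrects any single-bit error via the parity-check syndrome.

def hamming74_encode(d):
    """Encode data bits [d1,d2,d3,d4] into the 7-bit codeword
    [p1,p2,d1,p3,d2,d3,d4] (parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the three parity checks; the syndrome value is the
    1-based position of the flipped bit (0 means no error)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits
```

The same syndrome-decoding idea, at far larger scale and over larger alphabets, underlies the BCH and Reed-Solomon codes covered in the survey.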
PERFORMANCE OF ITERATIVE LDPC-BASED SPACE-TIME TRELLIS CODED MIMO-OFDM SYSTEM...ijcseit
This paper presents the bit error rate (BER) performance of a low-density parity-check (LDPC) based space-time trellis coded 2×2 multiple-input multiple-output orthogonal frequency-division multiplexing (STTC-MIMO-OFDM) system for text message transmission. The system under investigation incorporates a rate-1/2 LDPC encoding scheme under various digital modulations (BPSK, QPSK, and QAM) over an additive white Gaussian noise (AWGN) channel and fading (Rayleigh and Rician) channels for two transmit and two receive antennas. At the receiving section of the simulated system, a minimum mean-square-error (MMSE) channel equalization technique is implemented to extract the transmitted symbols without enhancing the noise power level. The effectiveness of the proposed system is analyzed in terms of BER versus signal-to-noise ratio (SNR). The MATLAB-based simulation study shows that the proposed system performs best with BPSK, compared to the other digital modulation schemes, at relatively low SNRs under AWGN, Rayleigh, and Rician fading channels. The transmitted text message is retrieved effectively at the receiver using the iterative sum-product LDPC decoding algorithm. As anticipated, the performance of the LDPC-based STTC-MIMO-OFDM system degrades as the noise power increases.
The document summarizes a research paper that proposes a low density parity check (LDPC) algorithm for insertion/deletion channels. It begins by providing background on LDPC codes and how they are finding increasing use for reliable information transfer over bandwidth-constrained links with noise. It then describes the specific system model of a concatenated coding scheme using an outer error correcting code and inner marker code. The document analyzes the achievable transmission rates using this scheme through both bit-level and symbol-level maximum a posteriori probability (MAP) detection algorithms. It finds that the symbol-level approach confirms an advantage over codes optimized for additive white Gaussian noise channels.
Analysis of LDPC Codes under Wi-Max IEEE 802.16eIJERD Editor
LDPC codes have been shown to be by far the best coding scheme for transmitting messages over noisy channels. The main aim of this paper is to study the behaviour of LDPC codes under IEEE 802.16e guidelines. Rate-1/2 LDPC codes have been implemented on the AWGN channel, and the results show that they can be used on such channels with low BER. The BER can be further reduced by increasing the block length.
Simulation of Turbo Convolutional Codes for Deep Space MissionIJERA Editor
In satellite communication, deep space missions are the most challenging, since the system has to work at very low Eb/No. Concatenated codes are the ideal choice for such deep space missions. The paper describes the simulation of turbo codes in SIMULINK. The performance of a turbo code depends on various factors; in this paper, we consider the impact of interleaver design on the performance of the turbo code. A detailed simulation is presented, and the performance of different interleaver designs is compared.
The document discusses applications and simulations of error correction coding (ECC) for multicast file transfer. It provides an overview of different ECC and feedback-based multicast protocols and evaluates their performance based on simulations. Reed-Solomon coding on blocks provided faster decoding times than on entire files, while tornado coding had the fastest decoding but required slightly more packets for reconstruction. Simulations of protocols like MFTP and MFTP/EC using network simulators showed that using ECC like Reed-Muller codes significantly improved performance over regular MFTP.
Error-correcting coding has become an essential part of nearly all modern data transmission and storage systems. Low-density parity-check (LDPC) codes are a class of linear block codes with performance close to the Shannon limit. In this paper, two error-correcting codes from the LDPC family, Euclidean geometry low-density parity-check (EG-LDPC) codes and non-binary low-density parity-check (NB-LDPC) codes, are compared in terms of power consumption, number of iterations, and other parameters. For better performance of EG-LDPC codes, a maximum likelihood (ML) algorithm was proposed. NB-LDPC codes can provide better error-correcting performance, with an average of 10 to 30 iterations, but have high decoding complexity, which can be improved upon by EG-LDPC codes with the ML algorithm, requiring only three iterations to detect and correct errors. One-step majority-logic decodable (MLD) codes, a subclass of EG-LDPC codes, are used to avoid high decoding complexity. The power consumed by the NB-LDPC codes is 2.729 W, whereas the power consumed by the EG-LDPC codes with the ML algorithm is 1.148 W.
Chaos Encryption and Coding for Image Transmission over Noisy Channelsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
The Reliability in Decoding of Turbo Codes for Wireless CommunicationsIJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
The document discusses low-density parity-check (LDPC) codes. It begins with a brief history of LDPC codes, invented by Gallager in 1960 but rediscovered in the 1990s. It then discusses linear block codes and how they can be represented by generator and parity-check matrices. The key properties of LDPC codes are described, including their sparse parity-check matrix and regular or irregular structure. Decoding of LDPC codes using Tanner graphs and hard-decision bit-flipping algorithms is explained. Finally, some applications of LDPC codes in communication systems and data storage are provided.
This document describes techniques for creating compact and fast n-gram language models. It presents several implementations of n-gram language models that are both small in size and fast to query. The most compact implementation can store the 4 billion n-grams from the Google Web1T corpus in just 23 bits per n-gram, using techniques like implicitly encoding words and applying variable-length encodings to context deltas. It also discusses methods for improving query speed, such as a novel language model caching technique that speeds up both their implementations and SRILM by up to 300%.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Watermarking of JPEG2000 Compressed Images with Improved EncryptionEditor IJCATR
The need for copyright protection, ownership verification, and related safeguards for digital data is attracting more and more interest nowadays. Among the solutions to these issues, digital watermarking techniques are used, and a range of watermarking methods has been proposed. Compression plays a foremost role in the design of watermarking algorithms: for a digital watermarking method to be effective, it is vital that the embedded watermark be robust against compression. JPEG2000 is a standard for image compression and transmission that offers both lossy and lossless compression. The proposed approach implements a robust watermarking algorithm for JPEG2000 compressed and encrypted images, using the RC6 block cipher for encryption. The method embeds the watermark in the compressed-encrypted domain, and extraction is done in the decrypted domain. The proposal also preserves the confidentiality of the content, as the embedding is done on encrypted data. In all, three watermarking schemes are used: spread spectrum, scalar Costa scheme quantization index modulation, and rational dither modulation.
Lossless LZW Data Compression Algorithm on CUDAIOSR Journals
This document discusses implementing the LZW lossless data compression algorithm on CUDA. It begins with background on data compression techniques, including lossless vs lossy compression. It then provides details on the LZW algorithm, how it works, and its advantages over other dictionary-based algorithms like LZ77 and LZ78. The document discusses how CUDA and GPUs can be used for parallel computing. It reviews related work porting compression algorithms to CUDA. Finally, it proposes implementing LZW on CUDA to take advantage of GPU parallelism for faster compression times compared to CPU implementations. Performance comparisons from previous works porting other algorithms to CUDA suggest GPU implementations can achieve significant speedups.
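The core LZW algorithm the document ports to CUDA can be sketched as a plain single-threaded CPU version (the paper's contribution is the GPU parallelization; this sketch only shows the dictionary-based algorithm itself):

```python
# Minimal single-threaded LZW compressor/decompressor.

def lzw_compress(text: str) -> list[int]:
    """Emit a list of dictionary codes for the input string."""
    dictionary = {chr(i): i for i in range(256)}   # seed with single bytes
    w, out = "", []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc                           # extend the current match
        else:
            out.append(dictionary[w])        # emit code for longest match
            dictionary[wc] = len(dictionary) # learn the new phrase
            w = ch
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> str:
    """Rebuild the text by reconstructing the same dictionary on the fly."""
    dictionary = {i: chr(i) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:                                # the KwKwK special case
            entry = w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)
```

Note that the decompressor needs no transmitted dictionary: it rebuilds the same phrase table from the code stream, which is the property that distinguishes LZW from LZ77/LZ78 variants mentioned in the document.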
This document provides an overview of low-density parity-check (LDPC) codes. It discusses Shannon's coding theorem and the evolution of coding technology. LDPC codes were invented by Gallager in 1963 and have simple decoding algorithms that allow them to achieve performance close to the Shannon limit. The document defines regular and irregular LDPC codes using parity check matrices and Tanner graphs. It also discusses code construction, applications of LDPC codes in wireless communications standards, and concludes that LDPC codes are becoming the mainstream in coding technology.
Arabic named entity recognition using deep learning approachIJECEIAES
Most Arabic named entity recognition (NER) systems depend massively on external resources and handmade feature engineering to achieve state-of-the-art results. To overcome such limitations, we propose in this paper to use a deep learning approach to tackle the Arabic NER task. We introduce a neural network architecture based on bidirectional long short-term memory (LSTM) and conditional random fields (CRF) and experiment with various commonly used hyperparameters to assess their effect on the overall performance of our system. Our model takes two sources of information about words as input, pre-trained word embeddings and character-based representations, and eliminates the need for any task-specific knowledge or feature engineering. We obtain a state-of-the-art result on the standard ANERcorp corpus with an F1 score of 90.6%.
This document discusses low-density parity-check (LDPC) codes. It begins with an overview of LDPC codes, noting they were originally invented in the 1960s but gained renewed interest after turbo codes. It then covers LDPC code performance and construction, including generator and parity check matrices. Various representations of LDPC codes are examined, such as matrix and graphical representations using Tanner graphs. Applications of LDPC codes include wireless, wired, and optical communications. In conclusions, turbo codes achieved theoretical limits with a small gap and led to new codes like LDPC codes, which provide high-speed and high-throughput performance close to the Shannon limit.
REDUCED COMPLEXITY QUASI-CYCLIC LDPC ENCODER FOR IEEE 802.11N VLSICS Design
In this paper, we present a low-complexity quasi-cyclic low-density parity-check (QC-LDPC) encoder hardware design based on the Richardson and Urbanke lower-triangular algorithm, for the IEEE 802.11n wireless LAN standard at a block length of 648 and a code rate of 1/2. The LDPC encoder hardware implementation works at 301.433 MHz and can process a throughput of 12.12 Gbps. We apply the concept of multiplication by constant matrices in GF(2), which also optimizes the hardware required. The proposed QC-LDPC encoder architecture is compatible with high-speed applications. This hardwired architecture is less complex, as it avoids the conventionally used block memories and cyclic shifters.
A MULTI-LAYER HYBRID TEXT STEGANOGRAPHY FOR SECRET COMMUNICATION USING WORD T...IJNSA Journal
This paper introduces a multi-layer hybrid text steganography approach that utilizes word tagging and recoloring. Existing approaches are designed to excel at either imperceptibility, high hiding capacity, or robustness. The proposed approach does not use the ordinary sequential inserting process; it overcomes the issues of current approaches by pursuing imperceptibility, high hiding capacity, and robustness together through a hybrid of a linguistic technique and a format-based technique. The linguistic technique divides the cover text into embedding layers, where each layer consists of a sequence of words sharing a single part of speech detected by a POS tagger, while the format-based technique recolors the letters of the cover text with near RGB color coding to embed 12 bits of the secret message in each letter, which yields high hiding capacity and makes the embedding blind. Robustness is accomplished through the multi-layer embedding process, and the generated stego key significantly strengthens the security of the embedded messages and their size. An experimental comparison shows that the proposed approach outperforms currently developed approaches in providing an ideal balance between the imperceptibility, hiding capacity, and robustness criteria.
Low power ldpc decoder implementation using layer decodingajithc0003
This document proposes a low-power LDPC decoder implementation using layered decoding. It discusses how LDPC codes can be used for reliable data transmission and are finding increasing use. It describes layered decoding as an efficient approach that can decrease power consumption. The proposed method is vectored layer decoding, which overcomes limitations of traditional layered decoding. It involves encoding data using an LDPC generator matrix derived from the parity check matrix. Simulations were conducted to generate the generator matrix and encode data. The goal is to efficiently implement a low-power LDPC decoder using this vectored layer decoding approach.
LDPC (Low Density Parity Check) codes were first reported in 1962 by Gallager as a method of channel coding to defeat noise using low density parity check matrices. LDPC codes were rediscovered in 1996 by MacKay and Neal, who showed they could achieve performance close to Turbo Codes but with lower implementation costs. LDPC codes work by encoding data through matrix operations and decoding by finding the most probable vector that satisfies the parity check equation, with many possible decoding implementations. They have applications in wireless, wired, and optical communications where the code must be designed to balance highest coding gain with ease of implementation based on the throughput requirements and modulation scheme used.
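The "find a vector that satisfies the parity-check equation" idea above can be illustrated with a hard-decision bit-flipping decoder on a toy parity-check matrix. The tiny H below is purely illustrative, chosen only so the example is small enough to follow by hand; it is not a matrix from any standard or from the summarized document.

```python
# Toy hard-decision bit-flipping LDPC decoder over GF(2).
# H is a small illustrative parity-check matrix: a codeword c is valid
# when every row of H has even overlap with c (H @ c = 0 mod 2).

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(c):
    """Return one bit per parity check: 1 means that check is violated."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def bit_flip_decode(c, max_iters=10):
    """Repeatedly flip the bit that participates in the most failed checks."""
    c = list(c)
    for _ in range(max_iters):
        s = syndrome(c)
        if not any(s):
            return c                      # all parity checks satisfied
        # count how many failed checks touch each bit position
        votes = [sum(s[i] for i in range(len(H)) if H[i][j])
                 for j in range(len(c))]
        c[votes.index(max(votes))] ^= 1   # flip the worst offender
    return c
```

Practical decoders replace these hard votes with the soft-decision sum-product (belief propagation) algorithm on the Tanner graph of H, but the fixed point sought is the same: a vector with all-zero syndrome.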
Performance analysis and implementation for nonbinary quasi cyclic ldpc decod...ijwmn
Non-binary low-density parity check (NB-LDPC) codes are an extension of binary LDPC codes with
significantly better performance. Although various kinds of low-complexity iterative decoding algorithms
have been proposed, there is a big challenge for VLSI implementation of NBLDPC decoders due to its high
complexity and long latency. In this brief, highly efficient check node processing scheme, which the
processing delay greatly reduced, including Min-Max decoding algorithm and check node unit are
proposed. Compare with previous works, less than 52% could be reduced for the latency of check node
unit. In addition, the efficiency of the presented techniques is design to demonstrate for the (620, 310) NBQC-
LDPC decoder.
In this paper, we apply grammar-based pre-processing prior to using the Prediction by Partial Matching (PPM) compression algorithm. This achieves significantly better compression for different natural language texts compared to other well-known compression methods. Our method first generates a grammar based on the most common two-character sequences (bigraphs) or three-character sequences (trigraphs) in the text being compressed, and then substitutes these sequences with the respective non-terminal symbols defined by the grammar in a pre-processing phase prior to the compression. This leads to significantly improved compression results for various natural languages (a 5% improvement for American English, 10% for British English, 29% for Welsh, 10% for Arabic, 3% for Persian, and 35% for Chinese). We describe further improvements using a two-pass scheme, where the grammar-based pre-processing is applied again in a second pass through the text. We then apply the algorithms to the files in the Calgary Corpus and also achieve significantly improved compression results, between 11% and 20%, when compared with other compression algorithms, including a grammar-based approach, the Sequitur algorithm.
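The bigraph substitution step described above can be sketched as follows. The choice of non-terminal symbols (private-use Unicode characters) and the rule count are illustrative assumptions; the paper's actual grammar construction may differ.

```python
# Sketch of grammar-based pre-processing: map the most frequent bigraphs
# to fresh non-terminal symbols, substitute them before compression, and
# invert the substitution after decompression.

from collections import Counter

def build_bigraph_grammar(text: str, n_rules: int = 5) -> dict[str, str]:
    """Map the n most common bigraphs to unused single-character symbols."""
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    # Unicode private-use characters serve as non-terminals, assuming
    # they never occur in the input text.
    symbols = (chr(0xE000 + i) for i in range(n_rules))
    return {bigraph: next(symbols)
            for bigraph, _ in counts.most_common(n_rules)}

def preprocess(text: str, grammar: dict[str, str]) -> str:
    """Substitute each grammar bigraph with its non-terminal symbol."""
    for bigraph, symbol in grammar.items():
        text = text.replace(bigraph, symbol)
    return text

def postprocess(text: str, grammar: dict[str, str]) -> str:
    """Invert the substitution after decompression."""
    for bigraph, symbol in grammar.items():
        text = text.replace(symbol, bigraph)
    return text
```

Each substitution halves the length of every matched bigraph occurrence, and because the non-terminal symbols never occur in the original text, the pre-processing is exactly invertible, so the overall pipeline stays lossless.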
This document analyzes a hybrid cipher algorithm created by combining the TripleDES block cipher with the RC4 stream cipher. It describes the encryption and decryption processes of the hybrid cipher. Performance and secrecy of the hybrid cipher, TripleDES, and RC4 are evaluated by measuring encryption time and calculating secrecy values using Shannon's information theory for different sizes of input data ranging from 5KB to 100KB. Results show the hybrid cipher has comparable encryption time to TripleDES and more stable secrecy values than TripleDES or RC4.
IRJET- A Novel Hybrid Security System for OFDM-PON using Highly Improved RC6 ...IRJET Journal
This document proposes a novel hybrid security system for OFDM-PON using a highly improved RC6 algorithm and scrambling technique. The data is first scrambled using Morlet wavelet transform for spatial domain scrambling without data loss. It is then encrypted using a highly improved RC6 algorithm for enhanced security. By combining scrambling and encryption, a two-tier security approach is implemented with low complexity. Simulation results show low bit and symbol error rates, demonstrating improved system performance. The proposed scheme provides enhanced security for OFDM-PON networks with potential applications in secure optical communications.
A Modified Technique For Performing Data Encryption & Data DecryptionIJERA Editor
In this age of universal electronic connectivity, of viruses and hackers, and of electronic eavesdropping and electronic fraud, there is a real need to store information securely. This, in turn, has led to a heightened awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data and messages, and to protect systems from network-based attacks. Information security via encryption and decryption techniques has been a very popular research area for many years. This paper elaborates the basic concepts of cryptography, especially public- and private-key cryptography. It also contains a review of some popular encryption and decryption algorithms, and a modified method is proposed that is fast in comparison with the existing methods.
The document discusses applications and simulations of error correction coding (ECC) for multicast file transfer. It provides an overview of different ECC and feedback-based multicast protocols and evaluates their performance based on simulations. Reed-Solomon coding on blocks provided faster decoding times than on entire files, while tornado coding had the fastest decoding but required slightly more packets for reconstruction. Simulations of protocols like MFTP and MFTP/EC using network simulators showed that using ECC like Reed-Muller codes significantly improved performance over regular MFTP.
Error correcting coding has become an essential part of nearly all modern data transmission and storage systems. Low-density parity-check (LDPC) codes are a class of linear block codes with performance close to the Shannon limit. In this paper, two error correcting codes from the family of LDPC codes, namely Euclidean Geometry Low Density Parity Check (EG-LDPC) codes and non-binary low-density parity-check (NB-LDPC) codes, are compared in terms of power consumption, number of iterations, and other parameters. For better performance of EG-LDPC codes, a Maximum Likelihood (ML) algorithm was proposed. NB-LDPC codes can provide better error correcting performance with an average of 10 to 30 iterations but have high decoding complexity, which can be improved by EG-LDPC codes with the ML algorithm, requiring only three iterations for detecting and correcting errors. One-step majority logic decodable (MLD) codes, a subclass of EG-LDPC codes, are used to avoid high decoding complexity. The power consumed by NB-LDPC codes is 2.729 W, whereas the power consumed by EG-LDPC codes with the ML algorithm is 1.148 W.
Chaos Encryption and Coding for Image Transmission over Noisy Channels - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
The Reliability in Decoding of Turbo Codes for Wireless Communications - IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
The document discusses low density parity check (LDPC) codes. It begins with a brief history of LDPC codes, invented by Gallager in 1960 but rediscovered in the 1990s. It then discusses linear block codes and how they can be represented by generator and parity check matrices. The key properties of LDPC codes are described, including their sparse parity check matrix and regular or irregular structure. Decoding of LDPC codes using tanner graphs and hard decision bit flipping algorithms is explained. Finally, some applications of LDPC codes in communication systems and data storage are provided.
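The hard-decision bit-flipping decoding mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the parity-check matrix H below is a made-up toy example.

```python
# Hard-decision bit-flipping decoding on a toy parity-check matrix.
H = [  # 4 checks x 8 bits, illustrative sparse parity-check matrix
    [1, 1, 0, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0, 1, 1, 0],
]

def bit_flip_decode(r, H, max_iters=10):
    r = list(r)
    for _ in range(max_iters):
        # Syndrome: which parity checks are violated?
        syndrome = [sum(h[j] * r[j] for j in range(len(r))) % 2 for h in H]
        if not any(syndrome):
            return r  # all checks satisfied
        # Count, for each bit, how many failing checks it participates in.
        fails = [sum(s for s, h in zip(syndrome, H) if h[j])
                 for j in range(len(r))]
        # Flip the bit involved in the most unsatisfied checks.
        r[fails.index(max(fails))] ^= 1
    return r

codeword = [0] * 8            # the all-zero word satisfies any H
received = codeword[:]
received[3] ^= 1              # one-bit channel error
print(bit_flip_decode(received, H))   # recovers the all-zero codeword
```

On a Tanner graph this corresponds to passing hard messages between check nodes (the rows of H) and variable nodes (the columns).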
This document describes techniques for creating compact and fast n-gram language models. It presents several implementations of n-gram language models that are both small in size and fast to query. The most compact implementation can store the 4 billion n-grams from the Google Web1T corpus in just 23 bits per n-gram, using techniques like implicitly encoding words and applying variable-length encodings to context deltas. It also discusses methods for improving query speed, such as a novel language model caching technique that speeds up both their implementations and SRILM by up to 300%.
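One of the compaction tricks named above, variable-length encoding of context deltas, can be sketched with a standard base-128 varint. The context IDs below are invented for illustration; the paper's exact bit layout is not reproduced.

```python
# Base-128 varint encoding of sorted-context deltas: sorted IDs compress
# well as small gaps rather than raw 32-bit values.
def varint_encode(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(buf, pos=0):
    n, shift = 0, 0
    while True:
        b = buf[pos]; pos += 1
        n |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            return n, pos

contexts = [1000, 1003, 1017, 1018, 1100]        # illustrative IDs
deltas = [contexts[0]] + [b - a for a, b in zip(contexts, contexts[1:])]
blob = b"".join(varint_encode(d) for d in deltas)
print(len(blob), "bytes instead of", len(contexts) * 4)  # 6 bytes vs 20
```

Small gaps fit in one byte each, which is how sorted n-gram context lists shrink far below a fixed-width representation.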
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Engineering Research and Development - IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Watermarking of JPEG2000 Compressed Images with Improved Encryption - Editor IJCATR
The need for copyright protection, ownership verification, and related safeguards for digital data is attracting more and more interest nowadays. Among the solutions to these issues, digital watermarking techniques are used, and a range of watermarking methods has been proposed. Compression plays a foremost role in the design of watermarking algorithms: for a digital watermarking method to be effective, it is vital that an embedded watermark be robust against compression. JPEG2000 is a new standard for image compression and transmission, offering both lossy and lossless compression. The proposed approach implements a robust watermarking algorithm to watermark JPEG2000 compressed and encrypted images, using the RC6 block cipher for encryption. The method embeds the watermark in the compressed-encrypted domain, and extraction is done in the decrypted domain. The proposal also preserves the confidentiality of the content, as the embedding is done on encrypted data. In all, three watermarking schemes are used: Spread Spectrum, Scalar Costa Scheme Quantization Index Modulation, and Rational Dither Modulation.
Lossless LZW Data Compression Algorithm on CUDA - IOSR Journals
This document discusses implementing the LZW lossless data compression algorithm on CUDA. It begins with background on data compression techniques, including lossless vs lossy compression. It then provides details on the LZW algorithm, how it works, and its advantages over other dictionary-based algorithms like LZ77 and LZ78. The document discusses how CUDA and GPUs can be used for parallel computing. It reviews related work porting compression algorithms to CUDA. Finally, it proposes implementing LZW on CUDA to take advantage of GPU parallelism for faster compression times compared to CPU implementations. Performance comparisons from previous works porting other algorithms to CUDA suggest GPU implementations can achieve significant speedups.
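The dictionary-growing mechanism that gives LZW its edge over LZ77/LZ78 can be shown compactly. This is a single-threaded minimal sketch; the CUDA work described above parallelizes exactly this kind of loop.

```python
# Minimal LZW: codes start as the 256 single bytes; each novel phrase
# extends the dictionary by one entry.
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)   # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]  # KwKwK case
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)

text = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(text)
assert lzw_decompress(codes) == text
print(len(text), "bytes ->", len(codes), "codes")
```

The decompressor rebuilds the same dictionary from the code stream alone, which is why no dictionary needs to be transmitted.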
This document provides an overview of low-density parity-check (LDPC) codes. It discusses Shannon's coding theorem and the evolution of coding technology. LDPC codes were invented by Gallager in 1963 and have simple decoding algorithms that allow them to achieve performance close to the Shannon limit. The document defines regular and irregular LDPC codes using parity check matrices and Tanner graphs. It also discusses code construction, applications of LDPC codes in wireless communications standards, and concludes that LDPC codes are becoming the mainstream in coding technology.
Arabic named entity recognition using deep learning approach - IJECEIAES
Most Arabic Named Entity Recognition (NER) systems depend massively on external resources and handmade feature engineering to achieve state-of-the-art results. To overcome such limitations, we proposed, in this paper, to use a deep learning approach to tackle the Arabic NER task. We introduced a neural network architecture based on bidirectional Long Short-Term Memory (LSTM) and Conditional Random Fields (CRF) and experimented with various commonly used hyperparameters to assess their effect on the overall performance of our system. Our model gets two sources of information about words as input, pre-trained word embeddings and character-based representations, and eliminates the need for any task-specific knowledge or feature engineering. We obtained a state-of-the-art result on the standard ANERcorp corpus with an F1 score of 90.6%.
This document discusses low-density parity-check (LDPC) codes. It begins with an overview of LDPC codes, noting they were originally invented in the 1960s but gained renewed interest after turbo codes. It then covers LDPC code performance and construction, including generator and parity check matrices. Various representations of LDPC codes are examined, such as matrix and graphical representations using Tanner graphs. Applications of LDPC codes include wireless, wired, and optical communications. In conclusions, turbo codes achieved theoretical limits with a small gap and led to new codes like LDPC codes, which provide high-speed and high-throughput performance close to the Shannon limit.
REDUCED COMPLEXITY QUASI-CYCLIC LDPC ENCODER FOR IEEE 802.11N - VLSICS Design
In this paper, we present a low-complexity quasi-cyclic low-density parity-check (QC-LDPC) encoder hardware design based on the Richardson and Urbanke lower-triangular algorithm for the IEEE 802.11n wireless LAN standard, with a block length of 648 and a code rate of 1/2. The LDPC encoder hardware implementation works at 301.433 MHz and can process a throughput of 12.12 Gbps. We apply the concept of multiplication by constant matrices in GF(2), which also optimizes the hardware required. The proposed QC-LDPC encoder architecture is suitable for high-speed applications. This hardwired architecture is less complex as it avoids the conventionally used block memories and cyclic shifters.
A MULTI-LAYER HYBRID TEXT STEGANOGRAPHY FOR SECRET COMMUNICATION USING WORD T... - IJNSA Journal
This paper introduces a multi-layer hybrid text steganography approach that utilizes word tagging and recoloring. Existing approaches tend to excel at only one of imperceptibility, high hiding capacity, or robustness. The proposed approach does not use the ordinary sequential insertion process; it overcomes the issues of current approaches by pursuing imperceptibility, high hiding capacity, and robustness together through a hybrid of a linguistic technique and a format-based technique. The linguistic technique divides the cover text into embedding layers, where each layer consists of a sequence of words sharing a single part of speech detected by a POS tagger. The format-based technique recolors the letters of the cover text with near-identical RGB color codes to embed 12 bits of the secret message in each letter, which yields high hiding capacity and keeps the embedding blind. Robustness is accomplished through the multi-layer embedding process, and the generated stego key significantly strengthens the security of the embedded messages and their size. An experimental comparison shows that the proposed approach outperforms currently developed approaches in providing an ideal balance between the imperceptibility, hiding capacity, and robustness criteria.
Low power ldpc decoder implementation using layer decoding - ajithc0003
This document proposes a low-power LDPC decoder implementation using layered decoding. It discusses how LDPC codes can be used for reliable data transmission and are finding increasing use. It describes layered decoding as an efficient approach that can decrease power consumption. The proposed method is vectored layer decoding, which overcomes limitations of traditional layered decoding. It involves encoding data using an LDPC generator matrix derived from the parity check matrix. Simulations were conducted to generate the generator matrix and encode data. The goal is to efficiently implement a low-power LDPC decoder using this vectored layer decoding approach.
LDPC (Low Density Parity Check) codes were first reported in 1962 by Gallager as a method of channel coding to defeat noise using low density parity check matrices. LDPC codes were rediscovered in 1996 by MacKay and Neal, who showed they could achieve performance close to Turbo Codes but with lower implementation costs. LDPC codes work by encoding data through matrix operations and decoding by finding the most probable vector that satisfies the parity check equation, with many possible decoding implementations. They have applications in wireless, wired, and optical communications where the code must be designed to balance highest coding gain with ease of implementation based on the throughput requirements and modulation scheme used.
Performance analysis and implementation for nonbinary quasi cyclic ldpc decod... - ijwmn
Non-binary low-density parity-check (NB-LDPC) codes are an extension of binary LDPC codes with significantly better performance. Although various kinds of low-complexity iterative decoding algorithms have been proposed, the VLSI implementation of NB-LDPC decoders remains a big challenge due to their high complexity and long latency. In this brief, a highly efficient check node processing scheme, in which the processing delay is greatly reduced, is proposed, covering the Min-Max decoding algorithm and the check node unit. Compared with previous works, the latency of the check node unit is reduced by up to 52%. In addition, the efficiency of the presented techniques is demonstrated through the design of a (620, 310) NB-QC-LDPC decoder.
In this paper, we apply grammar-based pre-processing prior to using the Prediction by Partial Matching (PPM) compression algorithm. This achieves significantly better compression for different natural language texts compared to other well-known compression methods. Our method first generates a grammar based on the most common two-character sequences (bigraphs) or three-character sequences (trigraphs) in the text being compressed and then substitutes these sequences using the respective non-terminal symbols defined by the grammar in a pre-processing phase prior to the compression. This leads to significantly improved results in compression for various natural languages (a 5% improvement for American English, 10% for British English, 29% for Welsh, 10% for Arabic, 3% for Persian and 35% for Chinese). We describe further improvements using a two-pass scheme where the grammar-based pre-processing is applied again in a second pass through the text. We then apply the algorithms to the files in the Calgary Corpus and also achieve significantly improved results in compression, between 11% and 20%, when compared with other compression algorithms, including a grammar-based approach, the Sequitur algorithm.
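The bigraph substitution step described above can be sketched as follows. This is a simplified illustration under the assumption that unused control characters serve as the grammar's non-terminal symbols; the paper's actual grammar construction is richer.

```python
from collections import Counter

# Find the most frequent two-character sequences and replace each with a
# fresh non-terminal symbol before handing the text to the compressor.
def bigraph_preprocess(text, n_rules=3):
    grammar = {}
    nonterminals = iter("\x01\x02\x03\x04\x05")  # unused code points
    for _ in range(n_rules):
        pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
        bigraph, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = next(nonterminals)
        grammar[nt] = bigraph          # record the production nt -> bigraph
        text = text.replace(bigraph, nt)
    return text, grammar

def expand(text, grammar):
    # Undo productions in reverse order of creation.
    for nt, bigraph in reversed(list(grammar.items())):
        text = text.replace(nt, bigraph)
    return text

sample = "the theory of the thermal theme"
pre, rules = bigraph_preprocess(sample)
assert expand(pre, rules) == sample
print(len(sample), "->", len(pre), "chars before entropy coding")
```

The shortened, more regular string is what PPM then compresses; the grammar itself must accompany the compressed output so expansion can be reversed.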
This document analyzes a hybrid cipher algorithm created by combining the TripleDES block cipher with the RC4 stream cipher. It describes the encryption and decryption processes of the hybrid cipher. Performance and secrecy of the hybrid cipher, TripleDES, and RC4 are evaluated by measuring encryption time and calculating secrecy values using Shannon's information theory for different sizes of input data ranging from 5KB to 100KB. Results show the hybrid cipher has comparable encryption time to TripleDES and more stable secrecy values than TripleDES or RC4.
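The stream-cipher half of the hybrid scheme can be illustrated with a plain RC4 keystream generator. This is a textbook sketch only (the TripleDES stage is omitted, and RC4 should not be used in new designs):

```python
# RC4: key-scheduling (KSA) followed by keystream generation (PRGA),
# XORed with the data. Encryption and decryption are the same operation.
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for byte in data:                          # PRGA + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

plaintext = b"hybrid cipher demo"
ct = rc4(b"Key", plaintext)
assert rc4(b"Key", ct) == plaintext   # same keystream decrypts
```

In a hybrid design of the kind evaluated above, a block cipher would typically protect the key material while a stream stage like this handles the bulk data quickly.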
IRJET- A Novel Hybrid Security System for OFDM-PON using Highly Improved RC6 ... - IRJET Journal
Secrecy and Performance Analysis of Symmetric Key Encryption Algorithms - Tharindu Weerasinghe
The document analyzes the secrecy and performance of symmetric key encryption algorithms including block ciphers (DES, TripleDES, AES), stream ciphers (RC2, RC4) and hybrid algorithms combining block and stream ciphers (TripleDES+RC4, AES+RC4). The analysis is conducted based on two measurement criteria (secrecy of ciphers and encryption time) under two circumstances (variable input plaintext size and variable input plaintext length representing passwords). Results are presented in a table showing average secrecy values for each algorithm over varying input data sizes. The tool created allows users to select an algorithm and see corresponding performance and secrecy results.
IRJET- Block-Level Message Encryption for Secure Large File to Avoid De-Duplic... - IRJET Journal
The document proposes a new Block-Level Message Encryption (BL-MLE) approach to achieve more efficient deduplication of encrypted large files in cloud storage. BL-MLE can achieve file-level and block-level deduplication, block key management, and proof of ownership simultaneously using a small set of metadata. It describes the BL-MLE method and how it uses AES, RSA, and SHA algorithms for encryption, authentication, and integrity checks. The experimental results show the upload of an encrypted document by the owner, user registration and access of the encrypted file using decryption keys, and the auditor performing integrity checks and generating audit reports.
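The core idea that makes encrypted deduplication possible, deriving each block's key from the block's own content (message-locked or convergent encryption), can be sketched with the standard library. SHA-256 in counter mode stands in here for the AES step used in the paper; this is an illustrative construction, not the BL-MLE scheme itself.

```python
import hashlib

# Convergent encryption: key = H(block), so identical plaintext blocks
# yield identical ciphertexts and the cloud can deduplicate them.
def block_key(block):
    return hashlib.sha256(block).digest()

def keystream(key, length):
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_block(block):
    key = block_key(block)
    ct = bytes(b ^ s for b, s in zip(block, keystream(key, len(block))))
    return ct, key   # the block key is itself stored under the user's master key

b1 = b"same content block!"
b2 = b"same content block!"
ct1, k1 = encrypt_block(b1)
ct2, k2 = encrypt_block(b2)
assert ct1 == ct2   # identical blocks -> identical ciphertext (dedup works)
```

BL-MLE layers block-key management and proof-of-ownership metadata on top of this basic property.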
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Performance Analysis of DRA Based OFDM Data Transmission With Respect to Nove... - IJERA Editor
In this paper, we have analyzed the performance characteristics of OFDM data transmission with regard to a new high-speed RS decoding algorithm. The main characteristics examined are the speed and accuracy of the transmission irrespective of channel behaviour. We consider two cases, namely data transmission without error control and data transmission with error control. Each case is duly analyzed, and it is shown that high-speed RS decoding algorithms can actually benefit OFDM data transmission in advanced communication systems only if implemented at the hardware (VLSI) level, because of the significant processing overhead involved in a software-based implementation, even though the algorithm may have lower computational complexity.
Nowadays, the demand for real-time video communication is increasing rapidly. Search and rescue (SAR) applications such as earthquake rescue, avalanche victim recovery, and wildfire monitoring, in addition to highway surveillance, are examples of real-time applications in which communication time is the most important metric to optimize in order to support victims' lives. Thus, finding a simple and time-efficient encryption technique for securing the transmitted data becomes mandatory. In this paper, we present an efficient encryption technique with low computational complexity and low processing time that yields highly chaotic encrypted videos. The proposed technique is based on CABAC, where the bin-string of the Intra-Prediction Mode is encrypted with chaotic signals and the sign of the MVD is toggled randomly. For the residue coefficients, the signs of the AC coefficients are flipped randomly, and the first value of the DC coefficients is encrypted by XORing the bin-string with a random stream. All random streams are generated by chaotic systems using the Logistic map. The experimental results show that the proposed technique is highly effective for real-time applications and robust against different types of attacks.
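The Logistic-map keystream idea referenced above can be sketched directly. The parameters x0 and r below are illustrative, and the CABAC bin-string handling is not reproduced; this shows only how a chaotic iteration yields an XOR stream.

```python
# Keystream from the Logistic map x_{n+1} = r * x_n * (1 - x_n),
# quantized to bytes and XORed with the data.
def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)            # chaotic iteration
        out.append(int(x * 256) % 256)   # quantize state to one byte
    return bytes(out)

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(d ^ s for d, s in zip(data, stream))

frame = b"intra-prediction mode bits"
ks = logistic_keystream(x0=0.3141, r=3.9999, n=len(frame))
ct = xor_bytes(frame, ks)
assert xor_bytes(ct, ks) == frame   # XOR with the same stream decrypts
```

The sensitivity of the map to x0 and r is what lets a small shared secret generate a long, unpredictable-looking stream at very low computational cost.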
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme - IJERD Editor
Data over the cloud is transferred or transmitted between servers and users. The privacy of that data is very important, as it contains personal information. If the data is hacked, it can be used to defame a person socially. Delays also occur during data transmission, for example in mobile communication where bandwidth is low. Hence, compression algorithms are proposed for fast and efficient transmission, encryption is used for security purposes, and blurring provides an additional layer of security. These algorithms are hybridized to achieve robust and efficient security and transmission over a cloud storage system.
This document proposes a combined ciphering and coding scheme for image transmission over noisy wireless channels. The ciphering scheme is based on a modified chaotic Logistic map to provide security. Low Density Parity Check (LDPC) coding is used as the error correction technique to improve reliability over noisy channels. Simulation results show that LDPC coding outperforms turbo coding for this application in terms of bit error rate. The combined ciphering and coding scheme is analyzed and shown to enhance performance metrics like peak signal-to-noise ratio and bit error rate, achieving both security and reliability for image transmission over wireless channels.
Lossless Data Compression Using Rice Algorithm Based On Curve Fitting Technique - IRJET Journal
This document describes a lossless data compression technique called the Rice algorithm and a modification of the Rice algorithm using curve fitting. The Rice algorithm uses a two-stage approach of preprocessing data to decorrelate it followed by entropy coding to assign variable-length codes. The modified Rice algorithm uses curve fitting to generate a function to predict future data points for improved compression performance compared to the original Rice algorithm. Simulation results show the modified algorithm achieves better compression by selecting an encoding option that uses fewer bits than the original Rice algorithm on a sample data set.
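The entropy-coding stage of the Rice algorithm can be sketched in a few lines: each residual is split into a quotient, sent in unary, and a k-bit remainder. This is a minimal bit-string illustration, not the paper's implementation; the residual values are invented.

```python
# Rice coding with parameter k: value n -> unary(n >> k) + binary(n mod 2^k).
def rice_encode(n: int, k: int) -> str:
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient, terminator, remainder

def rice_decode(bits: str, k: int):
    q = bits.index("0")                          # count leading 1s
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, bits[q + 1 + k:]        # value and remaining bits

residuals = [0, 3, 5, 1, 2]   # small prediction residuals compress well
k = 2
stream = "".join(rice_encode(n, k) for n in residuals)
decoded, rest = [], stream
while rest:
    v, rest = rice_decode(rest, k)
    decoded.append(v)
assert decoded == residuals
print(len(stream), "bits for", residuals)
```

The curve-fitting modification described above improves the predictor so the residuals fed into this coder stay small, which is exactly when Rice codes are short.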
1. The document presents an audio watermarking algorithm based on discrete wavelet transform and least significant bit insertion (DWT-LSB). The DWT decomposes the audio signal into sub-bands to identify locations for embedding watermark bits. The watermark bits are embedded in the high-resolution sub-band using LSB insertion.
2. Testing showed SNR values between 53.7 and 59.6 dB for different audio signals, indicating imperceptibility of the embedded watermark. The DWT-LSB method provides satisfactory robustness and imperceptibility for audio watermarking.
3. Digital watermarking techniques like the one proposed can help protect copyright of digital audio by embedding invisible signatures as
This document compares different watermarking schemes (Spread Spectrum, Scalar Costa scheme Quantization Index Modulation, and Rational Dither Modulation) and encryption techniques (RC4, RC5, RC6) for securing JPEG2000 images. It finds that RC5 is more secure than RC4 due to more rounds, and RC6 is used instead of RC5 due to greater use of registers. The watermarking schemes provide robustness while the encryption techniques secure the images for copyright protection and content authentication. Peak signal-to-noise ratio and mean square error are compared for different watermarked images.
Comparative Study of Watermarking and Encryption Schemes for JPEG2000 Images - Sebastin Antony Joe
This document compares different watermarking and encryption schemes for securing JPEG2000 images. It summarizes three watermarking schemes (Spread Spectrum, Scalar Costa scheme Quantization Index Modulation, and Rational Dither Modulation) and three encryption techniques (RC4, RC5, RC6). It analyzes the robustness of these schemes against various attacks when used with the watermarking systems for JPEG2000 images. The document also discusses the technical background of JPEG2000 compression, the encryption and watermarking algorithms, and analyzes the security and robustness of the encryption and watermarking schemes. It concludes that various combinations of the discussed watermarking and encryption schemes can be used for applications requiring copyright protection and content authentication of
This document discusses DNA cryptography as a new approach for secure data transmission. It begins with an abstract that outlines DNA cryptography and its use of DNA strands to hide information. The introduction discusses how cryptography is used to protect sensitive data and describes DNA cryptography as a new innovative approach. It then provides details on DNA and the basic concepts of DNA cryptography, including how DNA base pairs are used to represent binary data. The document compares traditional cryptography techniques to DNA cryptography and surveys several works that have been done in the field of DNA cryptography, demonstrating its potential advantages over traditional approaches.
NEW ALGORITHM FOR WIRELESS NETWORK COMMUNICATION SECURITY - ijcisjournal
This paper evaluates the security of a wireless communication network based on fuzzy logic in MATLAB. A new hybrid algorithm is proposed and evaluated. We highlight the valuable assets in the design of a wireless network communication system based on the network simulator NS2, which is crucial for protecting the security of such systems. Block cipher algorithms are evaluated using fuzzy logic, and a hybrid algorithm is proposed. Both algorithms are evaluated in terms of security level. The AND logic is used in the modelling rules, and the Mamdani style is used for the evaluations.
Color Cryptography using Substitution Method - ijtsrd
In the world of computer networks, threats come in many different forms; some of the most common today are software attacks. If we want to secure any type of data, we can use encryption. All traditional encryption methods use substitution and transposition. Substitution methods map plaintext into ciphertext by replacing characters, numbers and special symbols with other characters, numbers and special symbols. In this paper, we use a creative cryptographic substitution method to generate a stronger cipher than the existing substitution algorithms. This method focuses on the replacement of characters, numbers and special symbols with color blocks. The substitution algorithm is based on the Play Color Cipher. Yashvanth. L | Dr. N. Shanmugapriya "Color Cryptography using Substitution Method" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29360.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/29360/color-cryptography-using-substitution-method/yashvanth-l
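A toy substitution of symbols with color blocks, in the spirit of the Play Color Cipher described above, can be sketched with a keyed palette. The construction below is illustrative only, not the published algorithm.

```python
import random

SYMBOLS = [chr(i) for i in range(32, 127)]  # printable ASCII alphabet

# The numeric key seeds a shuffled palette: each symbol gets a unique
# RGB triple, so the "ciphertext" is a sequence of color blocks.
def keyed_palette(key: int):
    rng = random.Random(key)
    colors = rng.sample(range(256 ** 3), len(SYMBOLS))
    to_rgb = lambda v: (v >> 16 & 255, v >> 8 & 255, v & 255)
    enc = {s: to_rgb(c) for s, c in zip(SYMBOLS, colors)}
    dec = {c: s for s, c in enc.items()}
    return enc, dec

enc, dec = keyed_palette(key=2019)
cipher = [enc[c] for c in "ATTACK AT DAWN"]   # list of RGB blocks
plain = "".join(dec[c] for c in cipher)
assert plain == "ATTACK AT DAWN"
```

Only a party holding the same key can regenerate the palette and map the color blocks back to symbols; like any monoalphabetic substitution, stronger variants must also defeat frequency analysis.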
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document proposes an adaptive steganography technique for hiding secret data in digital images. The technique uses variable bit length embedding based on wavelet coefficients of the cover image. A logistic map is used to generate a secret key, which determines the RGB color planes and blocks used for data hiding. Wavelet coefficients are classified into ranges, and the number of bits hidden corresponds to the coefficient value range. Extraction involves selecting the same coefficients based on the key and calculating the hidden bits. The technique aims to improve security, capacity and imperceptibility compared to existing constant bit length methods. Evaluation shows the proposed method provides good security since variable bits are hidden randomly using the secret key.
Hamming net based Low Complexity Successive Cancellation Polar Decoder - RSIS International
This paper aims to implement a hybrid Polar encoder using the knowledge of mutual information and channel capacity. Further, a Hamming-weight successive cancellation decoder is simulated with QPSK modulation in the presence of additive white Gaussian noise. Experiments on the effect of channel polarization have shown that, for a 256-bit data stream, 30% of the channels have zero capacity and 49% of the channels have a capacity of one bit. The decoding complexity is reduced to almost half compared with the conventional successive cancellation decoding algorithm. However, the required SNR of 7 dB is achieved at the targeted BER of 10^-4. The penalty paid is in terms of the training time required at the decoding end.
Modeling And Simulation Swash Plate Pump Response Characteristics in Load Sen... - IJMERJOURNAL
ABSTRACT: Fluid power is widely employed in applications requiring high loads, such as tractors, cranes, and airplanes. In load sensing hydraulic systems, loads are controlled by adjusting a pump-valve arrangement. In this paper, the swash plate pump hydraulic characteristics are determined, and the pump and its fluid gains are derived to obtain the pump's overall transfer function. Firstly, the swash plate pump mechanism is analyzed and its dynamic model is constructed; the pump pressure and flow rate are plotted and possible improvements are introduced. The load sensing unit parameters, such as orifice width, orifice area, maximum passage area, and piston area at X and Y, are examined to identify their influence on the pump characteristics, and the optimum parameters are introduced. All results are developed and simulated numerically.
Generation of Electricity Through A Non-Municipal Solid Waste Heat From An In... - IJMERJOURNAL
ABSTRACT: Energy production, waste disposal, and pollution minimization are key problems that must be addressed for environmentally sustainable cities. Waste management has become a major concern worldwide, and incineration is now used increasingly to treat waste that cannot be recycled economically. The total heat content of non-municipal waste varies from country to country. The tonnage generated in Nigeria is expected to soar over the next few years, and the renewable energy locked up in urban municipal solid waste can be exploited and converted into grid energy. The heat generated from such an incineration plant can be used to generate electricity, which will reduce overdependence on fossil fuels and the use of generators, in turn reducing the pollution caused by the disposal of this waste. Hence, this paper intends to review the non-municipal waste potential in Nigeria, evaluate its environmental and economic cost, and assess the energy content of municipal solid waste deposits in Nigeria.
A New Two-Dimensional Analytical Model of Small Geometry GaAs MESFET - IJMERJOURNAL
ABSTRACT: In this paper, a simple and exact analytical model for a small-geometry GaAs MESFET is developed to determine the potential distribution along the channel of the device. The model is based on the exact solution of the two-dimensional Poisson's equation in the depletion region under the gate. The obtained model is then used to study the channel potential and threshold voltage of the device. Using the analytical model, the effect of the device parameters and bias conditions on the performance of the device is investigated. The obtained results are graphically exhibited and discussed. To verify the analytical results, a TCAD device simulator is used, and good agreement is observed.
Design a WSN Control System for Filter Backwashing ProcessIJMERJOURNAL
ABSTRACT: Day by day, there is a growing need for accurate automation systems in industry and in environmental monitoring and control. In water treatment plants, during the filtration phase, there is a process called backwashing, in which particles suspended in the filter basin are removed. In this process, water is forcibly pumped through the filter in the upward direction at sufficient speed to expand the filter media. Various types of valves are therefore used, opened and closed in a time-sequenced manner. This paper proposes an automation control system that allows the backwashing process to be initiated and completed automatically using a PLC, level sensors, and valves installed inside the filter basin. In practice, a control system has been applied in which all valves are opened and closed at the appropriate time according to wireless signals coming from the PLCs. In summary, the process control demonstrated that the proposed system is efficient, effective, and reliable. Moreover, the results increase productivity at low cost.
Application of Customer Relationship Management (Crm) Dimensions: A Critical ...IJMERJOURNAL
ABSTRACT: Customer relationship management (CRM) is crucial in a competitive environment. For this reason, it has become a widely implemented strategy across the hotel industry for retaining customers and maintaining good relationships with them. It also promotes customer satisfaction and loyalty, which lead to competitive business performance. However, due to ever-increasing competition and continuously changing customer needs in the hotel industry, achieving customer satisfaction is becoming a major challenge. Using 40 respondents from 20 hotels, this paper explores managers' perspectives on how the application of CRM dimensions impacts the performance of hotels and examines the relationships between them, since studies evaluating these relationships are limited. It studies CRM from the hotels' perspective, as guest services managers and marketing managers were the respondents. The study employed an exploratory research design and a quantitative technique. The survey was conducted in New Delhi, and simple random sampling was used to select respondents. With the study's objectives in mind, the developed research hypotheses were tested using multiple regression analysis. The results reveal that CRM dimensions are positively related to the performance of hotels. Finally, CRM dimensions are highly recommended as a competitive strategic tool to enhance competitiveness.
Comparisons of Shallow Foundations in Different Soil ConditionIJMERJOURNAL
ABSTRACT: Soil is regarded by the engineer as a complex material produced by the weathering of solid rock. Footings are structural elements that transmit column or wall loads to the underlying soil below the structure, and they are designed to transmit these loads without exceeding the soil's safe bearing capacity. Each building demands a solution to the problem of foundation on a different type of soil. The main aim of this project is to design appropriate foundations, by size and shape, on cohesive, non-cohesive, and rocky soils. In this paper, different foundations are studied for a middle, side, and corner column of a building with different bearing capacities. Based on the study and judicious judgment, the type of foundation is decided according to depth, quantity of steel, and quantity of concrete, in order to find which shape of foundation is the most stable and economical and which eases the construction of the building.
Place of Power Sector in Public-Private Partnership: A Veritable Tool to Prom...IJMERJOURNAL
ABSTRACT: Public-private partnership involves private sector engagement in infrastructural development. In the past, the country's infrastructure experienced decline because government had been the sole contributor to infrastructure finance and had often taken responsibility for implementation, operations, and maintenance as well. This decline is caused by escalating population growth pressing on available infrastructure, decay of existing power infrastructure, political instability, and corruption in the system. The ongoing reform aims to bring the system into the limelight. Public-private partnership participation in infrastructural development in Nigeria will therefore create a favorable environment for investors and provide job opportunities, long-term policy and decision making, and efficient use of available resources. This paper accordingly gives an overview of public-private partnership with regard to energy and other infrastructural development in Nigeria. Challenges of the partnership and possible solutions for subduing the problems are proffered.
Study of Part Feeding System for Optimization in Fms & Force Analysis Using M...IJMERJOURNAL
ABSTRACT: This paper describes the development of a flexible vibratory bowl feeding system suitable for use in a flexible manufacturing system. It considers the vibratory bowl feeder for automatic assembly, presents a geometric model of the feeder, and develops a force analysis leading to dynamic modeling of the vibratory feeder. Based on leaf-spring modeling of the three legs of the feeder's symmetrically arranged bowl, and treating the vibratory feeder as a three-legged parallel mechanism, the paper reveals the geometric properties of the feeder. The effects of the leaf-spring legs are transformed into forces and moments acting on the base and bowl of the feeder. Resultant forces are obtained via coordinate transformation, and the moment analysis is based on the orthogonality of the orientation matrix. This reveals the characteristic of the feeder that the resultant force is along the z-axis and the resultant moment is about the z direction, and it further yields the closed-form motion equation.
Investigating The Performance of A Steam Power PlantIJMERJOURNAL
ABSTRACT: A performance analysis of the Shobra El-Khima power plant in Cairo, Egypt is presented, based on energy and exergy analysis, to determine the causes and sites of high exergy destruction and losses and the possibilities for improving plant performance. The performance of the plant was evaluated at different loads (full, 75%, and 50%). The calculated thermal efficiency based on the heat added to the steam was 41.9%, 41.7%, and 43.9%, while the exergetic efficiency of the power cycle was 44.8%, 45.5%, and 48.8% at maximum, 75%, and 50% load respectively. The condenser was found to have the largest energy losses, with 54.3%, 55.1%, and 56.3% of the energy added to the steam lost to the environment at maximum, 75%, and 50% load respectively. The maximum exergy destruction was found in the turbine, where the percentage of exergy destruction was 42%, 59%, and 46.1% at maximum, 75%, and 50% load respectively. The pump had the minimum exergy destruction. It was also found that the exergy destruction in the feed water heaters and the condenser together represents the maximum exergy destruction in the plant (about 52%). This means that the irreversibilities in the plant's heat transfer devices play a significant role in exergy destruction, so improvement of the power plant is thought to be limited by the heat transfer devices.
Study of Time Reduction in Manufacturing of Screws Used in Twin Screw PumpIJMERJOURNAL
ABSTRACT: This paper discusses time reduction in the manufacturing of screws for twin screw pumps. Screws play a vital role in pump performance, because the pump transfers fluid by means of its screws. There is a clearance between the screws, which characterizes these positive displacement pumps: with no point of contact between the screws, no friction is generated. Automation is the best route for product development to reduce the manufacturing time of any product. In this paper we also explain how automation helps reduce the time needed to manufacture a product and thus increase productivity.
Mitigation of Voltage Imbalance in A Two Feeder Distribution System Using IupqcIJMERJOURNAL
ABSTRACT: The proliferation of electronic equipment in commercial and industrial processes has resulted in increasingly sensitive electrical loads being fed from the power distribution system, which introduces contamination into the voltage and current waveforms at the point of common coupling of industrial loads. When the unified power quality conditioner (UPQC) is connected between two different feeders (lines), this method of connection is called an Interline UPQC (IUPQC). This paper proposes a new connection for a UPQC to improve the power quality of two feeders in a distribution system. The IUPQC specifically aims at integrating a series VSC and a shunt VSC to provide a high-quality power supply by means of voltage sag/swell compensation and harmonic elimination in a power distribution network, so that improved power quality is available at the point of common coupling. The structure, control, and capability of the IUPQC are discussed in this paper. The efficiency of the proposed configuration has been verified through simulation using MATLAB/Simulink.
Adsorption of Methylene Blue From Aqueous Solution with Vermicompost Produced...IJMERJOURNAL
ABSTRACT: The removal of methylene blue, a synthetic dye, from an aquatic system was investigated using vermicompost. The effects of dye concentration, contact time, and pH of the solution were examined in the adsorption studies. The batch adsorption experimental data fit the Langmuir isotherm and fit the second-order kinetic model very well (pH = 10). The maximum adsorption capacity was calculated as 256.66 mg g-1. The vermicompost and the dye-loaded vermicompost were characterized by SEM and FTIR. It was found that the vermicompost is stable without losing its activity.
Analytical Solutions of simultaneous Linear Differential Equations in Chemica...IJMERJOURNAL
ABSTRACT: An analytical method for solving homogeneous linear differential equations in chemical kinetics and pharmacokinetics using the homotopy perturbation method is proposed. The mathematical model that depicts the pharmacokinetics is solved. Herein, we report a closed-form analytical expression for the concentrations of the species for all values of the kinetic parameters. These results are compared with numerical results and found to be in satisfactory agreement. The obtained results are valid over the whole solution domain.
Experimental Investigation of the Effect of Injection of OxyHydrogen Gas on t...IJMERJOURNAL
ABSTRACT: Oxy-hydrogen gas (HHO) is a mixture of hydrogen and oxygen produced by water electrolysis. In this work, an experimental exploration was carried out to study the effect of adding oxy-hydrogen gas to the inlet air manifold on the speed performance characteristics of a diesel engine at different operating conditions. The experimental work was performed on a test rig comprising a four-stroke 5.67-liter water-cooled diesel engine and a Heenan hydraulic dynamometer. Instrumentation included devices for measuring engine speed, load, fuel consumption, and inlet air flow rate. The measurements were conducted at 1000, 1500, 2000, and 2500 rpm. At each speed, the engine load was adjusted to 20%, 40%, and 80% of the engine's full load, corresponding to brake mean effective pressures of 1.55, 3.11, and 6.22 bar respectively, for an oxy-hydrogen generator supply current of 26 A and an electrolyte concentration of 25%. The fuel saving percentage, and hence the brake thermal efficiency, of the HHO-enriched CI engine is most evident at low loads and high speeds. The volumetric efficiency drop was about 5% at low speeds, falling to about 2% at higher engine speeds.
Hybrid Methods of Some Evolutionary Computations AndKalman Filter on Option P...IJMERJOURNAL
ABSTRACT: The search for a better option price continues within financial institutions. In pricing a put option, holders of the underlying stock always want to make the best decision by maximizing profit. We present an optimal hybrid model among the following combinations: Kalman Filter-Genetic Programming (KF-GP), Kalman Filter-Evolutionary Strategy (KF-ES), and Evolutionary Strategy-Genetic Programming (ES-GP). Our results indicate that the hybrid method involving the Kalman Filter and Evolutionary Strategy (KF-ES) is the best model for any investor. Sensitivity analysis was conducted on the model parameters to ascertain the rigidity of the model.
An Efficient Methodology To Develop A Secured E-Learning System Using Cloud C...IJMERJOURNAL
ABSTRACT: Nowadays, every activity in our lives is becoming computerized in order to reduce time, complexity, and manual effort. Education systems are also being computerized to train students more efficiently; such a system is termed E-Learning. E-Learning is an Internet-based learning process in which Internet technology is used to design, implement, manage, and extend learning, improving its efficiency. Learning, teaching, and training are closely connected components, all included in the development of an E-Learning system. Cloud computing provides an efficient platform to support E-Learning systems, as demand can change dramatically over time. In this paper, an overview of the emerging E-Learning system, the utilization of SaaS (Software as a Service), and a methodology for testing a person's proficiency in a secure way are described.
Nigerian Economy and the Impact of Alternative Energy.IJMERJOURNAL
ABSTRACT: Nigeria is endowed with natural resources aimed at developing the country. The need for alternative energy resources to drive the nation's economy cannot be over-emphasized. Incessant power failure has grossly affected the economy, seriously slowing development in rural and sub-rural settlements. A robust solution must be found to end the crisis. Alternative energy sources have the potential to solve the power problem in Nigeria while providing a safer and cleaner environment than fossil fuels. This paper also examines the socio-economic benefits of alternative energy (solar, wind, biomass, hydro, and geothermal) to the nation's economy, and the utilization of these resources to meet the needs of present and future generations, provide sustainable development, improve the standard of living, and mitigate climate change.
ABSTRACT: Many writers describe empowerment as a process, as opposed to a condition or state of being, and this is a key feature of empowerment emphasized by many researchers. As a process, empowerment is difficult to measure with the standard tools available to social scientists. A case study helps us understand a complex issue or object and can extend experience or add strength to what is already known from previous research. Case studies also emphasize detailed contextual analysis of a limited number of events or conditions and their relationships. The researcher made use of this qualitative research method to examine contemporary real-life situations.
Validation of Maintenance Policy of Steel Plant Machine Shop By Analytic Hier...IJMERJOURNAL
ABSTRACT: In this paper, the maintenance activities of the machine shop of a steel plant are considered. Maintenance is a routine, recurring activity that keeps a machine in its normal operating condition so that it can deliver its expected performance without loss of time due to accidental damage or breakdown. To reduce maintenance cost and ensure the safety of operators as well as residents of the nearby area, it is better to decide which machines should undergo which type of maintenance. The Analytic Hierarchy Process (AHP) is used for this decision, with the Expert Choice software performing the calculations, and this paper validates the results.
Li-Fi: The Future of Wireless CommunicationIJMERJOURNAL
ABSTRACT: Nowadays people access the wireless internet through their electronic devices, but clogged airwaves are going to make it increasingly difficult to latch onto a reliable signal. Radio waves are just one part of the spectrum that can carry our data; we can also use "data through illumination". In this process, the data is sent through an LED source that varies in intensity faster than the human eye can follow. A future can be envisioned where the data for laptops, smartphones, and all other smart applications is transmitted through the light in our living room. And security would be a snap: if you can't see the light, you can't access the data.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional-integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for this study. Multiple simulations show very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is object orientation? What is OO development? OO themes; evidence for the usefulness of OO development; OO modeling history. Modeling as a design technique: modeling, abstraction, the three models. Class Modeling: object and class concepts, link and association concepts, generalization and inheritance, a sample class model, navigation of class models, and UML diagrams.
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation, because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method is much better at finding smart grid intrusions than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
A Compression & Encryption Algorithms on DNA Sequences Using R2P & Selective Technique
International OPEN ACCESS Journal Of Modern Engineering Research (IJMER)
| IJMER | ISSN: 2249–6645 | www.ijmer.com | Vol. 7 | Iss. 2 | Feb. 2017 | 39 |
A Compression & Encryption Algorithms on DNA Sequences Using R2P & Selective Technique
Syed Mahamud Hossein2,3, A. Mitra1, Pradeep Kumar Das Mohapatra2, Debashis De3
2,3 Research Scholar, Vidyasagar University, Midnapur-721102
1 Department of M.C.A., HIT, Haldia
2 Department of Microbiology, Vidyasagar University, Midnapur-721102, West Bengal
3 Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, BF-142, Sector-I, Kolkata, India.
ABSTRACT: The size of DNA (Deoxyribonucleic Acid) sequences ranges from millions to billions of nucleotides and grows two- to three-fold annually. Efficient lossless compression techniques, and data structures to store, access, securely communicate and search these large datasets, are therefore necessary. The compression algorithm presented here for genetic sequences is based on searching for exact repeat, reverse and palindrome (R2P) substrings, substituting them, and creating a library file. Each R2P substring is replaced by a corresponding ASCII character: for repeats, ASCII characters ranging from 33 to 33+72 are selected; for reverses, from 33+73 to 33+73+72; and for palindromes, from 179 to 179+72. In the selective encryption technique, the data are encrypted in the library file, in the compressed file, or in both, also by using ASCII codes, with the online library file acting as a signature. Selective encryption, in which part of a message is encrypted while the remaining part is left unencrypted, is a viable proposition for running an encryption system on resource-constrained devices. The algorithm achieves a moderate compression rate, provides strong data security, runs in a few seconds, and has complexity O(n^2). The compressed data are also compressed again by a renowned compressor to further reduce the compression rate and ratio. This technique achieves a compression rate of 2.004871 bits/base.
Keyword: DNA Sequence, Lossless Compression, ASCII Code, Repeat, Reverse, Palindrome, Substitution and Encryption
Abbreviation of R2P: Repeat, Reverse and Palindrome

I. INTRODUCTION
DNA databases are very large [1-8] and complex, and must contain some logical organization [9-10]; hence devising data structures to store, access and process these data efficiently is a difficult and very challenging task [11-12]. An efficient compression algorithm is therefore needed to store this huge mass of data. Standard compression techniques [13-14] cannot compress biological sequences well because the regularities in DNA sequences are much subtler [15]. Two-bit encoding is efficient if the bases are randomly distributed in the sequence, but the life of an organism is non-random, so the DNA sequences appearing in a living organism are expected to be non-random and to have some constraints [15]. Huffman coding also fails badly on DNA sequences, in both the static and the adaptive model, because the probabilities of occurrence of the four symbols are not very different [11,15]. There are many repeats [11] within a given DNA sequence (e.g. ATGC), which may occur more than once. Recently, several algorithms have been proposed for the compression of DNA sequences based on their special structures [11,16-17]. Although much work has been done on selective encryption of images, video, speech, etc., little work has been done on selective encryption of compressed DNA sequences [18-19]. Compared with DNA computing, research on biological cryptology has attracted less attention [20-24].

This DNA sequence compression algorithm achieves a moderate compression ratio and runs significantly faster than existing compression programs on benchmark DNA sequences. The algorithm was developed on the basis of a fast and sensitive homology search [25], used as our exact R2P search engine. The proposed algorithm consists of three phases: i) finding all exact repeats, reverses and palindromes; ii) encoding R2P regions and non-match regions; and iii) encrypting the library file, the compressed file, or both. Nowadays information security is a most challenging question: how to protect DNA data from hackers [26-31]. Selective encryption is the process of selecting a part of a whole message for encryption,
keeping the remaining portion of the message clear in such a way that the security is not compromised. In the
selective encryption process only a fraction (r) of the whole message or plain text is selected for encryption and
the remaining part is kept in the clear. Selection of the 'r' part is vital from the security point of view in selective encryption; the criteria for selecting 'r' vary according to the type of medium. Intuitively, as 'r' increases, the security level also increases, at the cost of increased encryption time.
This compression method provides two-tier security: i) the data are compressed into two separate files, each containing ASCII characters; ii) selective encryption is applied to the library file, the compressed file, or both. This selective encryption approach not only reduces the time complexity of encryption and decryption, since only the part of the compressed data where the reconstruction information is mostly concentrated is encrypted, but also reduces storage and communication costs. We also developed specific programs for our requirements and obtained results for AES, DES and RSA [32-34].
If not otherwise mentioned, we use lower-case letters u, v to denote finite strings over the alphabet {a, t, g, c}. |u| denotes the length of u, i.e. the number of characters in u; ui is the ith character of u, and ui:j is the substring of u from position i to position j. The first character of u is u1; thus u = u1:|u|. Similarly, |v| denotes the length of v, vi is the ith character of v, and vi:j is the substring of v from position i to position j; the first character of v is v1, and v = v1:|v|. The substring ui:j is matched against vi:j, where vi:j represents a repeat, reverse or palindrome substring; the minimum difference between u and v is the substring length. A match is found if ui:j = vi:j, and the exact maximal R2P of ui:j is counted. We use epsilon to denote the empty string, with |epsilon| = 0.
This paper discusses the details of the algorithm, provides experimental results, and compares the compression rate, execution time and encryption time [35-42]. Other related routines include file size measurement, file mapping, DNA sequence orientation changing, and random string generation. The overall compression process is two-pass: the output of the first (R2P) step is compressed again by the FreeArc [43] compressor to obtain the final result.
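As a concrete illustration of the substitution scheme described in the abstract, the three ASCII ranges (33 to 33+72 for repeats, 33+73 to 33+73+72 for reverses, 179 to 179+72 for palindromes) can be sketched as a small lookup helper. The function name and the library-index parameter are ours, not the paper's:

```python
# Hypothetical sketch of the paper's R2P -> ASCII substitution ranges:
# repeats use codes 33..105, reverses 106..178, palindromes 179..251,
# i.e. 73 substitution codes per category.
CATEGORY_BASE = {"repeat": 33, "reverse": 33 + 73, "palindrome": 179}

def r2p_code(category, index):
    """Return the substitution character for the index-th (0..72)
    library entry of the given category."""
    if not 0 <= index <= 72:
        raise ValueError("each category holds only 73 codes")
    return chr(CATEGORY_BASE[category] + index)
```

For example, the first repeat entry maps to '!' (ASCII 33) and the first reverse entry to 'j' (ASCII 106).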
II. METHODS
2.1: Process diagram
Fig-1 & 2: Applying compression followed by encryption on the compressed file
2.2: File format: The input is a text file with a blank space before the end of the file. The output is also a text file, containing both the unmatched bases (a, t, g, c) and the coded ASCII character values.
2.3 Generating the substring from input sequence
a t g g t a g t a a t g t a c a t g ... n
It is clear that for the ith substring Wi:
i is the starting position of the substring, and
j = (i-1) + l is the end position of the substring, where l is the substring length.
A substring of length less than 3 (three) has no importance in the matching context; therefore we consider substring sizes in the range 3 <= l <= n.
The ranges for i and j are therefore 1 <= i <= n-l+1 and 1 <= j <= n respectively.
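The indexing rule above can be sketched as a small generator (names are ours) that yields every substring of length l >= 3 together with its 1-based start i and end j = (i-1) + l:

```python
def substrings(s, min_len=3):
    """Yield (i, j, w) for every substring w of s with length >= min_len,
    where i is the 1-based start position and j = (i-1) + l is the
    1-based end position."""
    n = len(s)
    for l in range(min_len, n + 1):          # substring lengths 3..n
        for i in range(1, n - l + 2):        # 1 <= i <= n-l+1
            j = (i - 1) + l                  # end position, j <= n
            yield i, j, s[i - 1:j]
```

For the 5-base input "atggt" this yields three substrings of length 3, two of length 4, and one of length 5.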
2.4: Searching for exact R2P
Consider a finite sequence s over the DNA alphabet {a, t, g, c}. An exact R2P is a substring of s that can be derived from another substring of s by repeat, reverse or palindrome operations. Only those substrings whose substitution yields a net profit in overall compression are encoded. The compression method is as follows:
1. Run the program and output all exact R2
P into a list s in the order of descending scores.
2. Extract a repeat, reverse and palindrome r with highest score from list s, and then replace all r by
corresponding ASCII code into another intermediate list o and place r in library file. Where r is repeat,
reverse and palindrome substring.
3. Process each R2
P in s so that there‘s no overlap with the extracted repeat, reverse and palindrome r.
4. Goto step 2 if the highest score of repeat, reverse and palindrome in s is still higher than a pre-defined
threshold; otherwise exits.
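Steps 1 to 4 can be sketched as a greedy loop. This is a simplified illustration, not the paper's implementation: it handles only exact repeats of one fixed length, uses occurrence count as the score, and uses a hypothetical `greedy_r2p` name and threshold:

```python
def greedy_r2p(text, min_len=3, threshold=2):
    """Greedy sketch: repeatedly pick the most frequent substring of
    length min_len, replace every occurrence with the next ASCII
    placeholder (starting at code 33), and record it in the library.
    Stops when the best score falls below the threshold."""
    library = {}
    code = 33                        # first ASCII code used by the scheme
    while True:
        counts = {}
        for i in range(len(text) - min_len + 1):
            w = text[i:i + min_len]
            if set(w) <= set("atgc"):    # score only pure-base substrings
                counts[w] = counts.get(w, 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        if counts[best] < threshold:
            break
        library[chr(code)] = best
        text = text.replace(best, chr(code))
        code += 1
    return text, library
```

For example, `greedy_r2p("atgatgatgc")` replaces the three occurrences of "atg" by "!" (ASCII 33), giving the compressed text "!!!c" and the library entry {"!": "atg"}; expanding the placeholders restores the input, so the step is lossless.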
2.5: Encoding R2P
An exact R2P can be represented by two kinds of triplets. The first is (l, m, p), where l is the length of the repeat, reverse or palindrome substring and m and p are the starting positions of the two substrings of the R2P pair. The second expresses the replace operation as (r, p, char), meaning that the exact repeat, reverse or palindrome substring at position p is replaced by the ASCII character char. To recover an exact R2P correctly, this replacement information must be encoded in the output data stream.
2.6: Decoding
Decoding first requires the online library file that was created while encoding the input file. Using its entries, the encoded input string is decoded to reproduce the original file.
2.7: Information security
This technique can provide two-tier information security.
Tier-I, Key-I (the R2P-created library file acts as a key)
Tier-II, Key-II (encryption)
Fig-2: Shows the Tier-I & Tier-II security technique
2.7.1: In tier one, the input sequence contains only the 4 bases (a, t, g, c). Compression reduces the file size and converts the text from a 4-letter alphabet to up to 256 characters: each matched 3-character substring is replaced by a single ASCII character, while unmatched a, t, g, c are kept. The output file is therefore more information-secure than the input file.
2.7.2: In tier two, a selective encryption technique is applied to the compressed output file in one of three ways: i) select a single ASCII character (any of the 256); ii) select numeric characters only; iii) select a pattern. For selective encryption, a private key and a public key are generated.
3. Algorithms
3.1: Compression Algorithm
1. Check for an already-replaced character; if found, just shift right and continue.
2. Replace the first three consecutive replaceable symbols by the next available special symbol, in sequential order.
3. Check for R2P matches in the rest of the string. If a repeat is found, replace it by the symbol used for the first three symbols; for a reverse or palindrome, use the character whose ASCII value is larger by 72 or 144 respectively.
(Fig-1 labels: Raw Data; Lib. File; Compressed & ASCII data)
4. During each pass, place one entry in the library file mapping the original replaceable characters to the replacement character. Reverse and palindrome entries need not be stored: they can be recovered during replacement by adding 72 and 144 respectively.
5. Continue steps 1 to 4 until no three consecutive replaceable symbols exist.
6. Stop.
3.2: Decompression Algorithm
1. Extract the next character.
2. If it is one of 'a', 't', 'g', 'c', output it directly; otherwise replace it by its equivalent substring over 'a', 't', 'g', 'c', found by checking it against all replacement characters read from the library file.
3. On a direct match, replace it exactly with the library entry; otherwise replace it by the reverse or palindrome of the entry, if a match is found at the +72 or +144 additive ASCII value of a character given in the library file.
4. Continue until the full string consists only of 'a', 't', 'g' and 'c'.
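The pass-through rule for unmatched bases versus library lookups can be sketched as follows (a simplified illustration that omits the reverse/palindrome offsets; `decompress` is a hypothetical helper):

```python
def decompress(encoded, library):
    """Sketch of the decoding pass: characters in {a, t, g, c} pass
    through unchanged; any other character is looked up in the
    library and replaced by its original substring."""
    out = []
    for ch in encoded:
        if ch in "atgc":
            out.append(ch)          # unmatched base: copy directly
        else:
            out.append(library[ch]) # replaced symbol: expand from library
    return "".join(out)
```

For example, `decompress("!!!c", {"!": "atg"})` restores "atgatgatgc".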
3.3: Selective Encryption & Decryption Algorithms
3.3.1: Selective Encryption Algorithm
1. Input the filename with path.
2. Select a number or a specific string.
3. Use the RSA algorithm to encrypt the selected number or string.
4. Generate an auxiliary file holding flags for the encrypted regions of the data.
5. Generate the encrypted output file.
6. Finally, generate the public key and private key.
3.3.2: Selective Decryption Algorithm
1. Open the encrypted and auxiliary files.
2. Input the encryption option.
3. Read the encrypted-region information from the auxiliary file.
4. Use the private key to decrypt the data with the RSA module.
5. Obtain the decrypted output file.
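The two procedures can be sketched with textbook RSA on toy parameters. This is illustrative only: the primes are far too small for real security, and the hypothetical `selective_encrypt` / `selective_decrypt` helpers and the in-memory `flags` list stand in for the paper's RSA module and auxiliary file:

```python
# Toy textbook-RSA key pair (illustrative, NOT secure):
p, q = 61, 53
n = p * q                           # modulus, 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (needs Python 3.8+)

def selective_encrypt(text, select=str.isdigit):
    """Encrypt only the selected characters (here: digits) with
    textbook RSA; every other character passes through unchanged.
    The flags list marks which positions are encrypted."""
    out, flags = [], []
    for ch in text:
        if select(ch):
            out.append(pow(ord(ch), e, n))   # ciphertext integer
            flags.append(True)
        else:
            out.append(ch)
            flags.append(False)
    return out, flags

def selective_decrypt(out, flags):
    """Invert selective_encrypt using the private key d."""
    return "".join(chr(pow(c, d, n)) if f else c
                   for c, f in zip(out, flags))
```

Only the flagged positions change, so the round trip is exact while the bulk of the message stays unencrypted, which is the point of selective encryption on resource-constrained devices.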
III. ALGORITHM EVALUATION
4.1: Accuracy
For DNA sequence storage, accuracy must come first: even a single base mutation, insertion or deletion could result in a huge change of phenotype, so no mistake is tolerable in either compression or decompression. For accuracy checking, a one-by-one file-mapping algorithm was developed.
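The one-by-one file-mapping check can be sketched as a byte-by-byte comparison of the original file against the decompressed output (an illustrative stand-in; `files_identical` is a hypothetical helper):

```python
def files_identical(path_a, path_b):
    """Compare two files byte by byte in fixed-size chunks to
    confirm that the compress/decompress round trip is lossless."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(8192), fb.read(8192)
            if a != b:
                return False     # first mismatching chunk: files differ
            if not a:
                return True      # both streams exhausted: files identical
```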
4.2: Efficiency
The initial R2P algorithm compresses each matched substring of length l down to a single character for any DNA segment, so the destination file uses fewer ASCII characters to represent successive DNA bases than the source file.
4.3: Space Occupation
Our algorithm reads characters from the source file and writes them immediately into the destination file. It needs very little memory, storing only a few characters at a time, so the space occupation is constant.
5. Experimental results
We tested the R2P technique on the standard benchmark data used in [12, 44]; the definitions of compression ratio, rate and improvement are also taken from [44]. The compression ratio and rate of R2P are presented in Table-I, including results on artificial DNA data, and Graph-I shows the same. The last two columns show the average compression and decompression speed in seconds (10-1) per input byte (averaged over five runs for each sequence); "encode" means compression and "decode" means decompression. The selective encryption algorithm was also applied to the compressed data; Table-II shows the AES, DES & RSA results.
Table-I: Compression ratio, rate, selective encryption and speed for the DNA sequences (all four orientations). Each row lists: Sequence Name; Sequence Size; then, first for cellular and then for artificial DNA sequences: Reduced File Size in bytes (C); Library File Size (L); Compression Ratio (without gap); Compression Rate (bits/base); result of applying the Selective Encryption Algorithm; Encrypt Time; Compression Time; Encode Time (10-1 s); Decode Time.
Normal Orientation:
Mtpacga 100314 44784 128 -0.790857 3.581713 No Change <100 <100 33.35 0.329 45220 128 -0.808242 3.616484 No Change <100 <100 33.02 0.329
Mpomtcg 186608 83484 128 -0.792249 3.584498 No Change <100 <100 61.09 0.604 84468 128 -0.813341 3.626683 No Change <100 <100 59.78 0.604
Chntxx 155844 69900 128 -0.797387 3.594774 No Change <100 <100 50.27 0.494 70204 128 -0.80519 3.61038 No Change <100 <100 51.70 0.494
Chmpxx 121024 53886 128 -0.785233 3.570465 No Change <100 <100 37.19 0.384 54484 128 -0.804997 3.609995 No Change <100 <100 37.14 0.439
Humghcsa 66495 29717 128 -0.795323 3.590646 No Change <100 <100 21.86 0.219 29945 128 -0.809038 3.618077 No Change <100 <100 21.70 0.219
Humhbb 73308 32996 128 -0.807388 3.614776 No Change <100 <100 23.62 0.219 32926 128 -0.803569 3.607137 No Change <100 <100 23.68 0.219
Humhdabcd 58864 26470 128 -0.80742 3.614841 No Change <100 <100 18.29 0.219 26486 128 -0.808508 3.617015 No Change <100 <100 19.12 0.164
Humdystrop 38770 17396 128 -0.807996 3.615992 No Change <100 <100 12.41 0.109 17440 128 -0.812535 3.625071 No Change <100 <100 12.58 0.109
Humhprtb 56737 25503 128 -0.807004 3.614008 No Change <100 <100 18.73 0.164 25557 128 -0.810811 3.621623 No Change <100 <100 18.79 0.219
Vaccg 191737 85123 128 -0.778499 3.556997 No Change <100 <100 55.60 0.604 86121 128 -0.799319 3.598638 No Change <100 <100 61.04 0.604
Hehcmvcg 229354 102550 128 -0.790734 3.581468 No Change <100 <100 73.35 0.769 103304 128 -0.803884 3.607768 No Change <100 <100 73.62 0.714
Average 3.592744 3.614442
Reverse Orientation:
Mtpacga 100314 44692 128 -0.787188 3.574376 No Change <100 <100 29.06 0.329 45230 128 -0.808641 3.617282 No Change <100 <100 33.02 0.329
Mpomtcg 186608 83780 128 -0.798594 3.597188 No Change <100 <100 57.41 0.604 84132 128 -0.806139 3.612278 No Change <100 <100 59.83 0.604
Chntxx 155844 70090 128 -0.802264 3.604528 No Change <100 <100 46.75 0.494 70178 128 -0.804522 3.609045 No Change <100 <100 48.84 0.494
Chmpxx 121024 53700 128 -0.779085 3.55817 No Change <100 <100 34.94 0.384 54312 128 -0.799313 3.598625 No Change <100 <100 37.19 0.384
Humghcsa 66495 29783 128 -0.799293 3.598586 No Change <100 <100 21.64 0.219 29951 128 -0.809399 3.618798 No Change <100 <100 21.20 0.219
Humhbb 73308 32770 128 -0.795056 3.590113 No Change <100 <100 23.40 0.219 33074 128 -0.811644 3.623288 No Change <100 <100 23.84 0.219
Humhdabcd 58864 26260 128 -0.79315 3.586301 No Change <100 <100 19.12 0.219 26518 128 -0.810682 3.621365 No Change <100 <100 19.01 0.219
Humdystrop 38770 17404 128 -0.808821 3.617643 No Change <100 <100 12.52 0.109 17420 128 -0.810472 3.620944 No Change <100 <100 12.63 0.164
Humhprtb 56737 25445 128 -0.802915 3.60583 No Change <100 <100 18.02 0.219 25655 128 -0.81772 3.635441 No Change <100 <100 17.96 0.274
Vaccg 191737 85817 128 -0.792977 3.585954 No Change <100 <100 57.41 0.604 86283 128 -0.802698 3.605397 No Change <100 <100 61.64 0.604
Hehcmvcg 229354 102002 128 -0.781177 3.562353 No Change <100 <100 74.94 0.714 103238 128 -0.802733 3.605466 No Change <100 <100 74.50 0.769
Average 3.589186 3.61527
Complement Orientation:
Mtpacga 100314 44784 128 -0.790857 3.581713 No Change <100 <100 34.28 0.384 45220 128 -0.808242 3.6164842 No Change <100 <100 32.19 0.329
Mpomtcg 186608 83484 128 -0.792249 3.584498 No Change <100 <100 62.04 0.604 84468 128 -0.813341 3.6266827 No Change <100 <100 58.46 0.604
Chntxx 155844 69900 128 -0.797387 3.594774 No Change <100 <100 50.05 0.494 70204 128 -0.80519 3.6103796 No Change <100 <100 50.38 0.494
Chmpxx 121024 53886 128 -0.785233 3.570465 No Change <100 <100 37.08 0.384 54484 128 -0.804997 3.6099947 No Change <100 <100 37.08 0.384
Humghcsa 66495 29717 128 -0.795323 3.590646 No Change <100 <100 21.70 0.219 29945 128 -0.809038 3.6180765 No Change <100 <100 21.70 0.219
Humhbb 73308 32996 128 -0.807388 3.614776 No Change <100 <100 23.46 0.219 32926 128 -0.803569 3.607137 No Change <100 <100 23.73 0.219
Humhdabcd 58864 26470 128 -0.80742 3.614841 No Change <100 <100 18.13 0.219 26484 128 -0.808372 3.6167437 No Change <100 <100 19.12 0.219
Humdystrop 38770 17396 128 -0.807996 3.615992 No Change <100 <100 12.25 0.109 17440 128 -0.812535 3.6250709 No Change <100 <100 12.52 0.164
Humhprtb 56737 25503 128 -0.807004 3.614008 No Change <100 <100 10.73 0.164 25557 128 -0.810811 3.6216226 No Change <100 <100 18.73 0.164
Vaccg 191737 85123 128 -0.778499 3.556997 No Change <100 <100 55.71 0.604 86121 128 -0.799319 3.5986377 No Change <100 <100 61.09 0.604
Hehcmvcg 229354 102550 128 -0.790734 3.581468 No Change <100 <100 73.35 0.714 103304 128 -0.803884 3.6077679 No Change <100 <100 73.68 0.714
Average 3.592744 3.614418
Reverse Complement Orientation:
Mtpacga 100314 44692 128 -0.787188 3.5743765 No Change <100 <100 29.175 0.274 45230 128 -0.808641 3.6172817 No Change <100 <100 33.24 0.32967
Mpomtcg 186608 83780 128 -0.798594 3.5971877 No Change <100 <100 57.52 0.604 84132 128 -0.806139 3.6122781 No Change <100 <100 59.89 0.604
Chntxx 155844 70090 128 -0.802264 3.6045276 No Change <100 <100 46.75 0.494 70178 128 -0.804522 3.6090449 No Change <100 <100 49.06 0.494
Chmpxx 121024 53700 128 -0.779085 3.5581703 No Change <100 <100 35 0.384 54312 128 -0.799313 3.5986251 No Change <100 <100 32.25 0.439
Humghcsa 66495 29783 128 -0.799293 3.5985864 No Change <100 <100 21.70 0.219 29951 128 -0.809399 3.6187984 No Change <100 <100 21.20 0.219
Humhbb 73308 32770 128 -0.795056 3.5901129 No Change <100 <100 23.46 0.274 33074 128 -0.811644 3.623288 No Change <100 <100 24.01 0.274
Humhdabcd 58864 26260 128 -0.79315 3.5863006 No Change <100 <100 19.06 0.219 26518 128 -0.810682 3.6213645 No Change <100 <100 19.06 0.219
Humdystrop 38770 17404 128 -0.808821 3.6176425 No Change <100 <100 12.58 0.109 17420 128 -0.810472 3.620944 No Change <100 <100 12.63 0.109
Humhprtb 56737 25445 128 -0.802915 3.6058304 No Change <100 <100 18.02 0.219 25655 128 -0.81772 3.6354407 No Change <100 <100 18.02 0.164
Vaccg 191737 85817 128 -0.792977 3.5859537 No Change <100 <100 57.63 0.604 86283 128 -0.802698 3.605397 No Change <100 <100 61.81 0.604
Hehcmvcg 229354 102002 128 -0.781177 3.5623534 No Change <100 <100 75.05 0.714 103238 128 -0.802733 3.6054658 No Change <100 <100 74.94 0.714
Average 3.5891856 3.6152662
Graph-I: Compression rate for both cellular and artificial DNA sequences for the algorithm of Table-I.
Table-IV: Comparison of compression rates
Graph-III: Line chart comparing the compression rates of the algorithms in Table-IV.
III. RESULT DISCUSSION
The normal sequence is more compressible than the reverse, complement and reverse-complement orientations. Cellular DNA sequence compression rates and ratios differ noticeably because the sequences come from different sources (see Graph-I), whereas the artificial DNA sequences show the same compression rate and ratio across all data sets. The AES, DES & RSA encryption algorithms were tested on normal cellular DNA sequences only. The internal R2P matching pattern for cellular DNA sequences is the same across all source types, and the library file plays a key role in finding similarities or regularities in DNA sequences. The output file contains encrypted data together with the unmatched a, t, g and c, so it provides high information security, which is important for data protection during transmission and for protecting the nucleotide sequence of a particular source.
IV. CONCLUSION
This compression algorithm gives a good model for compressing DNA sequences that reveals the true characteristics of DNA sequences and is very useful for database storage. The method fails to achieve a higher compression rate and ratio than other standard methods, but it provides very high information security.
Important observations are:
a) The R2P substring length varies from 2 to 5, and no match was found once the substring length reached six or more. Substrings of length three are considerably more compressible than substrings of length four and above.
Columns: Dataset; Sequence Name; Base pairs / File size; UNIX Compress; gzip-9; lz(32K); lz(1M); GenBit Compress; R2P+FreeArc; Improvement over lz(1M).
Dataset-I:
MTPACGA 100314 2.12 2.232 2.249 2.285 2.243 2.12 2.02787
MPOMTCG 186608 2.20 2.280 2.289 2.326 --- 2.20 2.07751
CHNTXX 155844 2.19 2.291 2.300 2.352 2.232 2.19 2.098303
CHMPXX 121024 2.09 2.220 2.234 2.276 2.225 2.09 2.003503
HUMGHCSA 66495 2.19 1.551 1.580 1.513 --- 2.19 1.400526
HUMHBB 73308 2.20 2.228 2.255 2.286 2.226 2.20 2.063949
HUMHDABCD 58864 2.21 2.209 2.241 2.264 --- 2.21 2.022289
HUMDYSTROP 38770 2.23 2.377 2.427 2.432 2.234 2.23 2.209337
HUMHPRTB 56737 2.20 2.232 2.269 2.287 2.238 2.23 2.063274
VACCG 191737 2.14 2.190 2.194 2.245 2.237 2.14 1.996234
HEHCMVCG 229354 2.20 2.279 2.286 2.344 --- 2.20 2.090777
Average ---- 2.1790 2.189 2.211 2.237 2.2335 2.18 2.004871 10.37%
b) Cellular DNA sequences encode codons/amino acids; here the length-three library-file entries play a key role in the formation of the codon table.
c) This algorithm provides better data security than other methods. Applying security directly to a cellular DNA sequence gives a very low level of security, because the sequence contains only four bases and anyone can attack the data by trial and error. Our results show that after compression two separate files are created: the first is the compressed data, containing up to 256 different characters, and the second is the library file, which also contains more than four characters. If the two files are transmitted one by one, the data are very hard to attack. Since the compressed output contains more characters than the input file, the selective encryption technique can be applied to it, giving strong information security and more selective-encryption options.
d) Compressing a genome sequence helps increase the effectiveness of its uses. Speed of encryption and level of security are two important measurements for evaluating any encryption system.
e) The R2P technique converts the DNA sequence into up to 256 ASCII characters plus the unmatched a, t, g and c; in that form, Huffman and two-bit encoding algorithms can easily be applied to the DNA sequence.
f) The file size is unchanged before and after the selective encryption process.
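The two-bit encoding mentioned in observation (e) can be sketched as a generic packing of a pure a/t/g/c string (an illustration, not the paper's code; the `two_bit_pack` / `two_bit_unpack` names are hypothetical):

```python
# 2-bit code per base: four symbols fit exactly in two bits
BITS = {"a": 0b00, "t": 0b01, "g": 0b10, "c": 0b11}

def two_bit_pack(seq):
    """Pack a pure a/t/g/c sequence at 2 bits per base into an int;
    returns the packed value and its bit length."""
    value = 0
    for base in seq:
        value = (value << 2) | BITS[base]
    return value, 2 * len(seq)

def two_bit_unpack(value, nbits):
    """Inverse of two_bit_pack: read the bases back out, two bits
    at a time, from the most significant pair downwards."""
    inv = {v: k for k, v in BITS.items()}
    out = []
    for shift in range(nbits - 2, -2, -2):
        out.append(inv[(value >> shift) & 0b11])
    return "".join(out)
```

Packing "atgc" gives the byte 0b00011011 (27), and unpacking restores the original string, i.e. 2 bits/base for any sequence with no unmatched symbols.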
FUTURE WORK
We will try to reduce the time complexity and improve the compression rate and security.
ACKNOWLEDGEMENT
Above all, the authors are grateful to all our colleagues for their valuable suggestions, moral support, interest and constructive criticism of this study. The authors offer special thanks to their Ph.D. guides for help in carrying out the research work, and also thank our PCs.
REFERENCES
[1]. International nucleotide sequence database collaboration, (2013),[Online]. Available:
http://www.insdc.org.
[2]. Karsch-Mizrachi, I., Nakamura, Y., and Cochrane, G., 2012, The International Nucleotide
Sequence Database Collaboration, Nucleic Acids Research, 40(1), 33–37.
[3]. Deorowicz, S., and Grabowski, S., 2011, Robust relative compression of genomes with
random access, Bioinformatics, 27(21), 2979–2986.
[4]. Brooksbank, C., Cameron, G., and Thornton, J., 2010, The European Bioinformatics
Institute‘s data resources, Nucleic Acids Research, vol. 38, 17-25.
[5]. Shumway, M., Cochrane, G., and Sugawara, H., 2010, Archiving next generation sequencing
data, Nucleic Acids Research, vol. 38, 870-871.
[6]. Kapushesky, M., Emam, I., Holloway, E., et al. , 2010, Gene expression atlas at the European
bioinformatics institute, Nucleic Acids Research, 38(1), 690-698.
[7]. Ahmed A., Hisham G., Moustafa G., et al., 2010, EGEPT: Monitoring Middle East Genomic
Data, Proc., 5th Cairo International Biomedical Engineering Conf., Egypt, 133-137.
[8]. Korodi, G., Tabus, I., Rissanen, J., et al., 2007, DNA Sequence Compression Based on the
normalized maximum likelihood model, Signal Processing Magazine, IEEE, 24(1), 47-53.
[9]. Deepak Harbola et al., State of the art: DNA Compression Algorithms, International
Journal of Advanced Research in Computer Science and Software Engineering, 2013, pp. 397-400.
[10]. A. Postolico, et al., Eds., DNA Compression Challenge Revisited: A Dynamic Programming
Approach, Lecture Notes in Computer Science, Island, Korea: Springer, 2005, vol. 3537,
190–200.
[11]. Nour S. Bakr, Amr A. Sharawi, 'DNA Lossless Compression Algorithms: Review',
American Journal of Bioinformatics Research, 2013, pp. 72-81.
[12]. S. Grumbach and F. Tahi, "A new challenge for compression algorithms: Genetic sequences,"
J. Inform. Process. Manage., vol. 30, no. 6, pp. 875-886, 1994.
[13]. X. Chen, S. Kwong and M. Li, "A Compression Algorithm for DNA Sequences and its
Applications in Genome Comparison," Genome Informatics, 10:52-61, 1999.
[14]. Bell, T.C., Cleary, J.G., and Witten, I.H., Text Compression, Prentice Hall, 1990.
[15]. Matsumoto, T., Sadakane, K., and Imai, H., 2000, Biological Sequence Compression
Algorithms, Genome Informatics, 2000, pp. 43-52.
[16]. Giancarlo, R., Scaturro, D., and Utro, F., 2009, Textual data compression in computational
biology: a synopsis, Bioinformatics, 25(13), 1575–1586.
[17]. Nalbantoğlu, Ö. U., Russell, D.J., and Sayood, K., 2010, Data Compression Concepts and
Algorithms and their Applications to Bioinformatics, Entropy, 12(1), 34-52.
[18]. H. Cheng and X. Li, ―Partial Encryption of Compressed Images and Video,‖ IEEE
Transactions on Signal Processing, 48(8), 2000, pp. 2439-2451.
[19]. Nidhi S Kulkarni et al. Selective Encryption of Multimedia Images, XXXII National System
Conference, NSC 2008, pp 467-470
[20]. Gehani A, LaBean T H, Reif J H. DNA-based cryptography. In: DNA Based Computers V.
Providence, USA: American Mathematical Society, 2000. 233–249
[21]. Clelland C T, Risca V, Bancroft C. Hiding messages in DNA microdots. Nature, 1999, 399:
533–534
[22]. Leier A, Richter C, Banzhaf W, et al. Cryptography with DNA binary strands. Biosystems,
2000, 57: 13–22
[23]. Xiao G Z, Lu M X, Qin L, et al. New field of cryptograhy: DNA cryptography. Chinese Sci
Bull, 2006, 51: 1413–1420
[24]. Lu M X, Lai X J, Xiao G Z, et al. A symmetric-key cryptosystem with DNA technology. Sci
China Ser F-Inf Sci, 2007, 50: 324–333
[25]. Ma, B., Tromp, J. and Li, M. (2002) PatternHunter: faster and more sensitive homology
search. Bioinformatics, 18, 440-445.
[26]. Deepak Singh Chouhan, R. P. Mahajan, "An architectural framework for encryption and
generation of digital signature using DNA cryptography", International Conference on
Computing for Sustainable Global Development (INDIACom), 2014.
[27]. Grasha Jacob, A. Murugan, "DNA based cryptography: An overview & analysis", International
Journal of Emerging Science, 3(1), 36-42, March 2013.
[28]. L. Xuejia, L. Mingxin, Q. Lei, H. Junsong and F. Xinven, "Asymmetric encryption and signature
method with DNA technology", Science in China: Information Science, vol. 53, 2010, pp. 506-514.
[29]. Xing Wang, Qiang Zhang, "DNA computing based cryptography", Fourth International
Conference on Bio-Inspired Computing (BIC-TA 2009), pp. 1-3.
[30]. Zheng Zhang, Xiaolong Shi, Jie Liu, "A method to encrypt information with DNA
computing", 3rd International Conference on Bio-Inspired Computing: Theories and Applications,
2008, pp. 155-160.
[31]. G. Cui, L. Cuiling, L. Haobin and L. Xiaoguang, "DNA computing and its application to
information security field", IEEE 5th International Conference on Natural
Computation, Tianjin, China, Aug 2009, pp. 148-152.
[32]. William Stallings, Cryptography and Network Security: Principles and Practice, 5th edition,
2011.
[33]. Bell, T.C., Cleary, J.G., and Witten, I.H., Text Compression, Prentice Hall, 1990.
[34]. Atul Kahate (2008), Cryptography and Network Security, Tata McGraw-Hill Publishing
Company Ltd.
[35]. Jie Liu et al., A Fixed-Length Coding Algorithm for DNA Sequence Compression (draft,
using Bioinformatics LaTeX template), 2005, pp. 1-3.
[36]. Sheng Bao et al., A DNA Sequence Compression Algorithm Based on LUT and LZ77,
Conference: Signal Processing and Information Technology, 2005, pp. 1-14.
[37]. Alok Kumar Shukla et al., Data Encryption and Decryption using Modified RSA
Cryptography Based on Multiple Public Keys and 'n' Prime Numbers, International Journal of
Engineering Sciences & Research Technology, 2014, pp. 713-720.
[38]. Singh et al., Comparative Analysis of Cryptographic Algorithms, International Journal of
Advanced Engineering Technology, 2013, pp. 16-18.
[39]. Chen, X., et al., DNACompress: fast and effective DNA sequence compression,
Bioinformatics, 2002, 18, 1696-1698.
[40]. sansan.phy.ncu.edu.tw/~hclee/ref/CompressingDNAsequence.pdf
[41]. Matsumoto et al., Can General-Purpose Compression Schemes Really Compress DNA
Sequences?, Currents in Computational Molecular Biology, 2000, pp. 76-77.
[42]. P. Raja Rajeswari et al., GenBit Compress Tool (GBC): A Java-Based Tool to Compress
DNA Sequences and Compute Compression Ratio (bits/base) of Genomes, International
Journal of Computer Science and Information Technology, 2010, pp. 181-191.
[43]. http://freearc.org/
[44]. Xin Chen, Sam Kwong and Ming Li, "A Compression Algorithm for DNA Sequences
Using Approximate Matching for Better Compression Ratio to Reveal the True
Characteristics of DNA", IEEE Engineering in Medicine and Biology, 2001, pp. 61-66.