The document discusses data compression using Elias Delta coding. It begins by introducing compression and its purpose of reducing file sizes. It then explains Elias Delta coding, a lossless compression technique that encodes characters according to their frequency: the more common characters are assigned fewer bits, while less common characters are assigned more. It shows how Elias Delta coding works by assigning bit sequences to numbers, then applies the method to compress a sample text, presenting the original string, the formation of the character set, the bit lengths, and the compressed output, which is smaller than the original text.
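To make the bit assignment concrete, here is a minimal Python sketch of the standard Elias Delta code for positive integers; the function name elias_delta and the demo loop are illustrative choices, not taken from the document:

    def elias_delta(n: int) -> str:
        """Return the Elias Delta codeword of a positive integer as a bit string."""
        if n < 1:
            raise ValueError("Elias Delta is defined for integers >= 1")
        binary = bin(n)[2:]                 # n in binary, e.g. 10 -> "1010"
        length_bits = bin(len(binary))[2:]  # the bit length L of n, itself in binary
        # Elias Gamma code of L: (len(length_bits) - 1) zeros, then L in binary
        gamma = "0" * (len(length_bits) - 1) + length_bits
        # Delta code: Gamma(L) followed by n with its leading 1-bit dropped
        return gamma + binary[1:]

    for n in (1, 2, 3, 10):
        print(n, elias_delta(n))  # 1 -> 1, 2 -> 0100, 3 -> 0101, 10 -> 00100010

Because every codeword starts with a self-delimiting Gamma-coded length, small numbers get short codes and a concatenated stream can be split apart again without separators.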
DOI: 10.23883/IJRTER.2017.3406.TEGS6
Data Compression Using Elias Delta Code
Leni Marlina1, Andysah Putera Utama Siahaan2, Heri Kurniawan3, Indri Sulistianingsih4
Faculty of Computer Science, Universitas Pembangunan Panca Budi, Medan, Indonesia
2Ph.D. Student, School of Computer and Communication Engineering, Universiti Malaysia Perlis, Kangar, Malaysia
Abstract — Compression is an activity performed to reduce the size of data so that it becomes smaller than before. Compression arose from the lack of adequate storage capacity. Data compression is also needed to speed up data transmission between computer networks. Compression trades off speed against density: compression that prioritizes density takes longer than compression that relies on speed. Elias Delta is one of the lossless compression techniques and can compress characters. The compression is built from the frequency of each character in the document to be compressed, and it works by reducing the seven or eight bits normally spent on each character. The most common characters receive the fewest bits, while the rarest characters receive the longest codes. The formation of a character set serves to eliminate duplicate characters when counting the occurrences of each character, and the set also serves as the stored compression table. The method achieves a good ratio between the sizes before and after compression, and its compression and decompression speeds are outstanding.
Keywords — Compression, Elias Delta
I. INTRODUCTION
The growth of information today drives the makers of storage media to develop the technology of hard disks, flash drives, CDs, and other devices. This is due to the limited capacity of existing media; a dual-layer DVD, for instance, can only hold approximately 9 GB of data. Thus, new storage media such as the Blu-ray disc were created, which can store more than 30 GB. A common problem in data compression is how to compress data optimally, especially text data that has no file header. The aim is to obtain compression results with a size much smaller than the original. However, the compression process is not easy: the compression must preserve the original information exactly. If there is an error in the construction of the bit sequence, the decompression process will fail. To support all of this, one must identify the kind of compression suitable for a given type of data.
Compression has two types, lossy and lossless techniques [2]. Lossy compression is irreversible: the information discarded by this compression cannot be recovered, or in other words, this compression process has no exact decompression. Such processes are often used to shrink the size of video, audio, and images. This compression is done because of constrained storage media, especially on mobile phones, or because a phone's limited resolution constrains how the media is displayed or heard. The second technique is lossless compression. It is the common data compression for archiving, where the compressed result can be passed through a decompression process to recover the original data. The Elias Delta method is a lossless compression method for compacting data. By applying this method to shrink data, the burden on storage media is expected to be reduced.
II. THEORIES
2.1 Compression
The rapid development of information requires technology in all fields to flourish as well. Technology continually evolves to meet the community's need for information that can be accessed quickly and easily. Doing so requires large databases, which concerns both the storage media and the data access process: if the data has a large capacity, it slows down the access speed and quickly exhausts the capacity of the storage media [7][8]. This problem leads information technology developers to perform data compression, converting the original data into smaller data without reducing the value of the information it contains.
Compression is a technique of information reduction that yields new data with a size smaller than before compression [8]. It can be applied to several types of data: video, images, audio, and others. Image data comes in files with BMP, JPG, PNG, and TIFF extensions; audio data in MP3, AAC, RMA, and WMA; and video in MP4, WMV, AVI, and so on. The compression process encodes information using fewer bits, or other information-bearing units, than the unencoded representation of the data uses [3].
Compression techniques work in two ways:
Lossy Compression. This technique degrades the quality of the original media. Compressed data remains viewable or listenable like the original, but at lower quality. It is most often used for streaming media: audio, images, and video. The resulting file size is smaller than the original but still fit for its purpose. The process works by removing portions of the data that are not needed, portions the human eye does not directly perceive. So even after the data has been compressed, it can still be used, although it is not identical to the original. For example, a video with a Full HD resolution of 1920 x 1080 will slow playback on an old computer, so the resolution may need to be reduced to HD 1280 x 720 or VGA 640 x 480.
Lossless Compression. This compression is often done to save data storage or to combine several files into one so that they are not scattered. It is often used when sending emails with many attachments, to streamline the attachment process. The compressed data can be fully restored: the result of the decompression process is no different from the original. An example can be seen in the WinRAR application, whose compressed files take the form of ZIP, RAR, GZIP, 7-Zip, and so forth. Image, audio, and video files can also be compressed with this method, just as with lossy compression; however, the purpose is different, not to lower the quality or resolution but to archive the data. The file size after the compression process is not necessarily smaller than the original file if the data was already stored optimally.
2.2 Elias Delta
Elias Delta was invented by Peter Elias. The code applies the same method as Elias Gamma, especially in its head [1][4]. The encoding technique is as follows:
1. Find the highest binary power. For example, 11 has the binary value 1011, whose highest power is 3, so N' = 3.
2. Use Gamma coding to encode the number N, where N = N' + 1. For decimal 11, we therefore make the Gamma code of 4, which is 00100.
3. Append the remaining N' bits of the binary value to the result of step 2, which gives the answer 00100011.
Decoding with the Elias Delta method follows the reverse of steps one through three above. Suppose we decode 00100011:
1. Count the zeros before the first one: here 00, which amounts to two. This means the (2 + 1) bits starting from that one, namely 100, must be read next; 100 is decimal 4, so N' = N - 1 = 4 - 1 = 3.
2. With N' known, i.e. 3, the three remaining bits are the rest of the number, i.e. 011. Prepending the implicit leading one gives 1011, which means 11.
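The two procedures can be written down directly. Below is a minimal sketch in Python of the steps above; the function names are illustrative, not taken from the paper.

def elias_gamma(n: int) -> str:
    """Elias Gamma code of n: (bit-length - 1) zeros, then n in binary."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_delta_encode(n: int) -> str:
    """Elias Delta code of a positive integer n."""
    b = bin(n)[2:]                    # step 1: 11 -> '1011'
    n_prime = len(b) - 1              # highest binary power, N' = 3 for 11
    head = elias_gamma(n_prime + 1)   # step 2: Gamma code of N = N' + 1 -> '00100'
    return head + b[1:]               # step 3: append remaining N' bits -> '00100011'

def elias_delta_decode(code: str) -> int:
    """Decode a single Elias Delta codeword back to its integer."""
    zeros = 0
    while code[zeros] == "0":         # count the zeros before the first one
        zeros += 1
    n = int(code[zeros:2 * zeros + 1], 2)            # the next zeros + 1 bits give N
    n_prime = n - 1
    rest = code[2 * zeros + 1:2 * zeros + 1 + n_prime]
    return int("1" + rest, 2)         # prepend the implicit leading one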
Table 1. Elias Delta encoding system

Number        N   N+1   Encoding       Probability
1  = 2^0      0   1     1              1/2
2  = 2^1 + 0  1   2     0 1 0 0        1/16
3  = 2^1 + 1  1   2     0 1 0 1        1/16
4  = 2^2 + 0  2   3     0 1 1 00       1/32
5  = 2^2 + 1  2   3     0 1 1 01       1/32
6  = 2^2 + 2  2   3     0 1 1 10      1/32
7  = 2^2 + 3  2   3     0 1 1 11      1/32
8  = 2^3 + 0  3   4     00 1 00 000   1/256
9  = 2^3 + 1  3   4     00 1 00 001   1/256
10 = 2^3 + 2  3   4     00 1 00 010   1/256
11 = 2^3 + 3  3   4     00 1 00 011   1/256
12 = 2^3 + 4  3   4     00 1 00 100   1/256
13 = 2^3 + 5  3   4     00 1 00 101   1/256
14 = 2^3 + 6  3   4     00 1 00 110   1/256
15 = 2^3 + 7  3   4     00 1 00 111   1/256
16 = 2^4 + 0  4   5     00 1 01 0000  1/512
17 = 2^4 + 1  4   5     00 1 01 0001  1/512
18 = 2^4 + 2  4   5     00 1 01 0010  1/512
Table 1 shows the encoding system of Elias Delta [5][6]. As an illustration, consider n = 18, which produces the delta code 001010010. It is obtained from the following calculation:
1. Take the closest power of two below 18: 2^4 = 16.
2. Add 1 to the power, 4 + 1 = 5, and Gamma-code it, so the first part is 00101.
3. The remainder beyond that power is 18 - 16 = 2.
4. Append the remainder to the first part as a field of N' = 4 bits: 0010.
5. The result of the Elias Delta code is 001010010.
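Using the sketch functions above, both worked examples can be checked directly:

assert elias_delta_encode(11) == "00100011"     # the example in Section 2.2
assert elias_delta_encode(18) == "001010010"    # n = 18 from Table 1
assert elias_delta_decode("001010010") == 18    # and the round trip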
III. RESULTS AND DISCUSSION
At this stage, compression is tested on a simple string: "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG". The stages in this section are:
Calculate the length of the text
Create the character set and calculate its length
Calculate the bit length
Determine the padding bits, if necessary
Table 2 shows the original string before the compression process. The string has 43 characters.
Table 2. Original string
No. Char ASCII Binary
1 T 84 01010100
2 H 72 01001000
3 E 69 01000101
4 32 00100000
5 Q 81 01010001
6 U 85 01010101
7 I 73 01001001
8 C 67 01000011
9 K 75 01001011
10 32 00100000
11 B 66 01000010
12 R 82 01010010
13 O 79 01001111
14 W 87 01010111
15 N 78 01001110
16 32 00100000
17 F 70 01000110
18 O 79 01001111
19 X 88 01011000
20 32 00100000
21 J 74 01001010
22 U 85 01010101
23 M 77 01001101
24 P 80 01010000
25 S 83 01010011
26 32 00100000
27 O 79 01001111
28 V 86 01010110
29 E 69 01000101
30 R 82 01010010
31 32 00100000
32 T 84 01010100
33 H 72 01001000
34 E 69 01000101
35 32 00100000
36 L 76 01001100
37 A 65 01000001
38 Z 90 01011010
39 Y 89 01011001
40 32 00100000
41 D 68 01000100
42 O 79 01001111
43 G 71 01000111
The next stage is the formation of the character set. Repeated characters are eliminated so that only single characters remain. After this process, only 27 characters are left, each with its own frequency of occurrence. The number of bits in the original string is 344. The result of the character set process can be seen in Table 3, and a short sketch of this stage follows the table.
Table 3. Elias Delta characters set
No. Char ASCII Binary Freq. Bits
1 T 84 01010100 2 16
2 H 72 01001000 2 16
3 E 69 01000101 3 24
4 32 00100000 8 64
5 Q 81 01010001 1 8
6 U 85 01010101 2 16
7 I 73 01001001 1 8
8 C 67 01000011 1 8
9 K 75 01001011 1 8
10 B 66 01000010 1 8
11 R 82 01010010 2 16
12 O 79 01001111 4 32
13 W 87 01010111 1 8
14 N 78 01001110 1 8
15 F 70 01000110 1 8
16 X 88 01011000 1 8
17 J 74 01001010 1 8
18 M 77 01001101 1 8
19 P 80 01010000 1 8
20 S 83 01010011 1 8
21 V 86 01010110 1 8
22 L 76 01001100 1 8
23 A 65 01000001 1 8
24 Z 90 01011010 1 8
25 Y 89 01011001 1 8
26 D 68 01000100 1 8
27 G 71 01000111 1 8
Total 344
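A short sketch of this stage in Python (assuming the test string from Table 2) reproduces the counts above:

from collections import Counter

text = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
freq = Counter(text)            # the character set with occurrence frequencies

print(len(text))                # 43 characters in the original string
print(len(freq))                # 27 unique characters after elimination
print(len(text) * 8)            # 344 bits at 8 bits per character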
According to the bit assignments given to the characters by the Elias Delta table, the compressed size of the previous string is 248 bits, as seen in Table 4. This result is obtained by spending bits according to the frequency with which each character appears. The space character appears most often in the string, on eight occasions; one character appears four times and one appears three times; four characters appear twice; and 20 characters appear just once.
Table 4. Elias Delta result
No. Char Freq. Binary Length Bits
1 8 1 1 8
2 O 4 0100 4 16
3 E 3 0101 4 12
4 H 2 01100 5 10
5 U 2 01101 5 10
6 R 2 01110 5 10
7 T 2 01111 5 10
8 C 1 00100000 8 8
9 K 1 00100001 8 8
10 B 1 00100010 8 8
11 Q 1 00100011 8 8
12 I 1 00100100 8 8
13 W 1 00100101 8 8
14 N 1 00100110 8 8
15 F 1 00100111 8 8
16 X 1 001010000 9 9
17 J 1 001010001 9 9
18 M 1 001010010 9 9
19 P 1 001010011 9 9
20 S 1 001010100 9 9
21 V 1 001010101 9 9
22 L 1 001010110 9 9
23 A 1 001010111 9 9
24 Z 1 001011000 9 9
25 Y 1 001011001 9 9
26 D 1 001011010 9 9
27 G 1 001011011 9 9
Total 248
Bit Sequence:
011110110001011001000110110100100100001000000010000110010001001110010000100101001
001101001001110100001010000100101000101101001010010001010011001010100101000010101
010101011101011110110001011001010110001010111001011000001011001100101101001000010
1101100110000
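The assignment in Table 4 can be sketched as follows, reusing the functions and the frequency table from the earlier snippets. Note that the tie-breaking order among equal-frequency characters is an assumption here, so the exact bits may differ from the sequence printed above, although the total length comes out the same.

# Rank characters by descending frequency; the character of rank k
# receives the Elias Delta code of k.
ranked = sorted(freq, key=freq.get, reverse=True)
codebook = {ch: elias_delta_encode(rank)
            for rank, ch in enumerate(ranked, start=1)}

bit_sequence = "".join(codebook[ch] for ch in text)
print(len(bit_sequence))        # 248 compressed bits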
TB = 248
TC = TB / 8 = 248 / 8 = 31
Padding = TB mod 8 = 248 mod 8 = 0
Total Bits = 248 + 8 = 256
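Continuing the sketch under the same assumptions, the padding bookkeeping looks like this (the appended byte is the ASCII code of the padding count, as the next paragraph explains):

TB = len(bit_sequence)                     # 248
TC = TB // 8                               # 31 whole bytes
padding = TB % 8                           # 0, so no filler bits are needed here
# Append the padding count as an ASCII digit: '0' -> 00110000.
# (Filler bits for a nonzero padding are omitted in this sketch.)
padded = bit_sequence + format(ord(str(padding)), "08b")
print(len(padded))                         # 256 bits in total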
The final eight bits of the bit sequence encode the number of padding bits. For the calculation above, Padding = 0. The character "0" is number 48 in the ASCII code, which, converted to binary, yields 00110000. This byte is appended to the compression result, so the total becomes 248 + 8 = 256 bits. When the 256 bits are converted back into a sequence of characters, they produce the string {? FÒB ????????? N ?? E ¥ "?? R UuìYX®X, ËH [0 of 32 characters.
Compression Ratio = (after compression / before compression) * 100%
                  = (256 / 344) * 100%
                  = 74.41860465116279%
Redundancy = ((before compression - after compression) / before compression) * 100%
           = ((344 - 256) / 344) * 100%
           = (88 / 344) * 100%
           = 25.58139534883721%
The compression process has saved 25.58% of the original data size. The savings rate depends on the order and the character patterns of the original message.
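Both figures follow directly from the bit totals; a quick check in Python, assuming the 344-bit and 256-bit totals above:

before, after = 344, 256
ratio = after / before * 100                   # compression ratio
saving = (before - after) / before * 100       # redundancy, i.e. space saved
print(round(ratio, 2), round(saving, 2))       # 74.42 25.58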
IV. CONCLUSION
The calculations above show that Elias Delta compression is very effective at saving storage capacity. The Elias Delta method can be applied to all data types. The data is converted into a character set and then sorted by the frequency with which each character appears. Which characters appear most frequently varies from document to document, and the language of the text also determines which characters are used most. Applying this compression is well suited to saving storage media and to speeding up data transmission on a computer network. The downside of this compression lies in the decompression process: it requires an additional table to match the codes against the frequently occurring characters, and restoring the characters takes a long time because the pieces of bits must be matched one by one against the entries stored in the compression table.
REFERENCES
[1] R. T. Handayanto, "Elias Gamma & Delta Coding," 15 September 2014. [Online]. Available: https://rahmadya.com/2014/09/15/elias-gamma-delta-coding/. [Accessed 24 August 2017].
[2] S. D. Nasution and Mesran, "Goldbach Codes Algorithm for Text Compression," International Journal of Software & Hardware Research in Engineering, vol. 4, no. 11, pp. 43-46, 2016.
[3] Ameliachy, "Teknik Kompresi Data," 20 April 2012. [Online]. Available: https://ameliachy.wordpress.com/2012/04/20/teknik-kompresi-data/. [Accessed 24 August 2017].
[4] Antoni, E. B. Nababan and M. Zarlis, "Results Analysis of Text Data Compression on Elias Gamma Code, Elias Delta Code and Levenstein Code," International Journal of Science and Advanced Technology, vol. 4, no. 9, pp. 17-24, 2014.
[5] Wikipedia, "Elias Delta Coding," [Online]. Available: https://en.wikipedia.org/wiki/Elias_delta_coding.
[6] P. Elias, "Universal Codeword Sets and Representations of the Integers," IEEE Transactions on Information Theory, vol. 21, no. 2, pp. 194-203, 1975.
[7] Suherman and A. P. U. Siahaan, "Huffman Text Compression Technique," SSRG International Journal of Computer Science and Engineering, vol. 3, no. 8, pp. 103-108, 2016.
[8] R. Aarthi, D. Muralidharan and P. Swaminathan, "Double Compression of Test Data Using Huffman Code," Journal of Theoretical and Applied Information Technology, vol. 39, no. 2, pp. 104-113, 2012.