This document discusses a lossless column-oriented compression technique using Huffman coding. It begins by explaining that column-oriented data compression is more efficient than row-oriented compression because values within the same attribute are more correlated. It then proposes compressing and decompressing column-oriented data using the Huffman coding technique. Finally, it implements a software algorithm to compress and decompress column-oriented databases using Huffman coding in MATLAB.
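As a quick illustration of why the column-oriented layout helps, the sketch below compares the zero-order entropy (a lower bound on the average Huffman code length) of a row-major value stream against per-column streams. The table data is made up for illustration; this is not the paper's MATLAB implementation.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Bits per symbol under a zero-order (i.i.d.) model of the stream."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical table: each row is (age, country). Values within one
# column are far more repetitive than values within one row.
rows = [(34, "US"), (35, "US"), (34, "DE"), (36, "US"), (35, "DE")]
columns = list(zip(*rows))

row_stream = [v for row in rows for v in row]   # row-oriented layout
print("row-oriented entropy:", shannon_entropy(row_stream))
for i, col in enumerate(columns):               # column-oriented layout
    print(f"column {i} entropy:", shannon_entropy(list(col)))
```

A Huffman coder built per column can approach the lower per-column entropies, which is the intuition behind the paper's claim.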
Enhanced Image Compression Using Wavelets (IJRES Journal)
Data compression, which can be lossy or lossless, is required to decrease storage requirements and improve data transfer rates. One of the best image compression techniques uses the wavelet transform, which is comparatively new and has many advantages over others. The wavelet transform can draw on a large variety of wavelets for the decomposition of images. State-of-the-art coding techniques like Haar and SPIHT (set partitioning in hierarchical trees) use the wavelet transform as a basic and common step on top of which they build their own technical advantages. The results of the wavelet transform therefore depend on the type of wavelet used. In this thesis we have used different wavelets to transform a test image, and the results have been discussed and analyzed. Haar and SPIHT coding have been applied to an image and the results compared through qualitative and quantitative analysis in terms of PSNR values and compression ratios. Elapsed times for compressing the image with different wavelets have also been computed to identify the fastest image compression method. The analysis has been carried out in terms of PSNR (peak signal to noise ratio) obtained and time taken for decomposition and reconstruction.
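For the Haar case specifically, one decomposition level reduces to pairwise averages and differences along rows and then columns. A minimal NumPy sketch, assuming an even-sized grayscale image (SPIHT's subsequent tree coding is omitted):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    # Rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: repeat the filtering on both row-filtered halves.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.random.rand(8, 8)          # stand-in for a test image
ll, lh, hl, hh = haar2d_level(img)
# Most energy concentrates in LL; the detail subbands are near zero for
# smooth regions, which is what the subsequent coder exploits.
```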
This document summarizes a research paper that proposes a new lossless image compression algorithm called Pixel Size Reduction (PSR). The PSR algorithm achieves compression by representing pixels using the minimum number of bits needed based on their frequency of occurrence in the image, rather than a fixed 8 bits per pixel. Experimental results on test images showed that the PSR algorithm achieved better compression ratios than other lossless compression methods like Huffman, TIFF, GPPM, and PCX. The paper compares the compressed file sizes of the PSR algorithm to these other methods on various synthetic images.
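The published PSR coding rule is not reproduced in this summary, but the underlying idea of spending fewer than 8 bits per pixel when an image uses few distinct values can be sketched as follows (a simplification that ignores PSR's frequency weighting):

```python
import math

def min_bits_pack(pixels):
    """Pack pixels with ceil(log2(#distinct values)) bits each instead of 8.
    Illustrative only: the published PSR algorithm additionally exploits
    the frequency of occurrence of each value."""
    palette = sorted(set(pixels))
    width = max(1, math.ceil(math.log2(len(palette))))
    index = {v: i for i, v in enumerate(palette)}
    bits = "".join(format(index[p], f"0{width}b") for p in pixels)
    return palette, width, bits

pixels = [12, 12, 200, 12, 57, 200, 12, 57]   # toy image, 3 distinct values
palette, width, bits = min_bits_pack(pixels)
print(width, "bits/pixel instead of 8; total", len(bits), "bits")
```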
AVC based Compression of Compound Images Using Block Classification Scheme (Dr. P. S. Jagadeesh Kumar)
The document discusses a proposed method for compressing compound images using block classification and different compression schemes for different block types. The method classifies blocks of a compound image as background, text, hybrid, or picture blocks using a histogram-based approach. Different compression algorithms are then applied to different block types, including run-length encoding for background blocks, wavelet coding for text blocks, H.264 AVC with CABAC entropy coding for hybrid blocks, and JPEG coding for picture blocks. Experimental results showed that this block classification and compression scheme approach improved the compression ratio for compound images over single compression methods but increased computational complexity.
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
Compression can be defined as the art of representing information in a reduced form compared to the original. Image compression is extremely important today because of the increased demand for sharing and storing multimedia data. Compression is concerned with removing redundant or superfluous information from a file to reduce its size; the reduction saves both memory and the time required to transmit and store data. Compression techniques fall into two classes, lossless and lossy. This paper focuses on literature studies of various compression techniques and the comparisons between them.
A mathematical model and a heuristic memory allocation problem (Diego Montero)
Effective memory management in embedded systems reduces running time and power consumption. Memory allocation is complicated by the limited capacity and number of memory banks, as well as by potential runtime conflicts. We approached the memory allocation optimization problem through an exact ILP solution and a Tabu Search heuristic. Inputs from DIMACS instances were tested, and the results show a significant performance difference between the two approaches.
Types of data compression, lossy compression, lossless compression, and more; how data is compressed, etc. Slightly more extensive than the CIE O Level syllabus.
This document provides an introduction to data compression. It defines data compression as converting an input data stream into a smaller output stream. Data compression is popular because it allows for more data storage and faster data transfers. The document then discusses key concepts in data compression including lossy vs. lossless compression, adaptive vs. non-adaptive methods, compression performance metrics, and probability models. It also introduces several standard corpora used to test compression algorithms.
This document summarizes JPEG image compression techniques. It discusses how images are divided into blocks and transformed from the spatial domain to the frequency domain using the Discrete Cosine Transform (DCT). It then describes how the DCT coefficients are quantized and arranged in zigzag order before entropy encoding with Huffman coding. The goal of JPEG compression is to store image data using as little space as possible while maintaining enough visual detail. The techniques discussed aim to remove irrelevant and redundant image data through DCT, quantization, and entropy encoding.
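A compact sketch of that pipeline, with a toy flat quantization matrix rather than the standard JPEG luminance table, and with the final entropy coding omitted:

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2D type-II DCT of an 8x8 block, applied separably along both axes."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def zigzag(block):
    """Read an 8x8 block in JPEG zigzag order (low frequencies first)."""
    idx = sorted(((i, j) for i in range(8) for j in range(8)),
                 key=lambda p: (p[0] + p[1],
                                p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[i, j] for i, j in idx]

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
q = np.full((8, 8), 16.0)           # toy quantization matrix, not the JPEG table
coeffs = np.round(dct2(block) / q)  # quantization discards fine detail
stream = zigzag(coeffs)             # long zero runs at the tail feed the entropy coder
```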
Discrete cosine transform for image com... (Alexander Decker)
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
3D discrete cosine transform for image compression (Alexander Decker)
1. The document discusses 3D discrete cosine transform (DCT) for image compression. 3D-DCT takes a sequence of frames and divides them into 8x8x8 cubes.
2. Each cube is independently encoded using 3D-DCT, quantization, and entropy encoding. This concentrates information in low frequencies.
3. The technique achieves a compression ratio of around 27 for a set of 8 frames, better than 2D JPEG. By exploiting correlations in both spatial and temporal dimensions, 3D-DCT allows for improved compression over 2D transforms.
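A minimal sketch of the per-cube transform step using SciPy's separable n-dimensional DCT; the uniform quantizer here is a placeholder, not the paper's quantization volume:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Eight co-located 8x8 blocks from eight consecutive frames form one cube.
cube = np.random.rand(8, 8, 8)

coeffs = dctn(cube, norm='ortho')   # 3D-DCT: spatial + temporal decorrelation
q = 0.05                            # toy uniform quantization step
quantized = np.round(coeffs / q)
# The low-frequency corner holds most of the energy; a static scene makes
# the temporal axis nearly constant, so most temporal-frequency planes vanish.
reconstructed = idctn(quantized * q, norm='ortho')
```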
The document discusses the performance implications of two types of processing-in-memory (PIM) designs - fixed-functional PIM and programmable PIM - on data-intensive applications. It explores these implications through three benchmarks, including a real data analytics application involving gradient computation. The results show that PIMs provide speedups ranging from 2.09x to 91.4x over non-PIM designs. However, fixed-functional and programmable PIMs perform differently across applications, with up to a 90% performance difference. Neither PIM type is optimal for all cases. The best choice depends on workload and PIM characteristics as well as PIM overhead.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
This document discusses test data compression techniques for system-on-chip designs. It presents the XMatchPro algorithm, which combines dictionary-based and bit mask-based compression to significantly reduce testing time and memory requirements. The algorithm was applied to benchmarks and achieved 92% compression efficiency while improving decompression efficiency by up to 90% compared to other techniques, without additional overhead. Lossless compression techniques are also overviewed, including dictionary-based Lempel-Ziv methods that have been implemented in hardware using systolic arrays or content addressable memories.
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
With the onslaught of multimedia in the recent past, there has been a tremendous increase in the use of images. A very good example is the web, on which most documents contain images. Beyond this, images are used in other applications like weather forecasting, medical diagnosis, and police work. In the R-Tree implementation of an image database, images are made available to the program and then stored in the database. The image database is represented using an R-tree and is stored in a separate file. This R-tree implementation supports both updates and efficient retrieval of images from hard disk [1][2][4]. We use the similarity-based retrieval feature to retrieve the required number of similar images queried by the user [3][5][6]. A distance matrix approach is used to measure the similarity of images [7]. The Sobel edge detection algorithm is used to form sketches. If a sketch of an image is entered for similarity-based retrieval, sketches of the stored images are formed and compared with the input sketch using the distance matrix approach [8][9].
This document compares the performance of three lossless image compression techniques: Run Length Encoding (RLE), Delta encoding, and Huffman encoding. It tests these algorithms on binary, grayscale, and RGB images to evaluate compression ratio, storage savings percentage, and compression time. The results found that Delta encoding achieved the highest compression ratio and storage savings, while Huffman encoding had the fastest compression time. In general, the document evaluates and compares the performance of different lossless image compression algorithms.
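Of the three, delta encoding is the least standardized; below is a round-trip sketch of the variant presumably meant here (first value kept verbatim, then neighbor differences):

```python
def delta_encode(samples):
    """Keep the first value, then store successive differences,
    which stay small and highly compressible for smooth data."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 100, 101, 103, 103, 104]   # toy grayscale scanline
enc = delta_encode(row)                 # [100, 0, 1, 2, 0, 1]
assert delta_decode(enc) == row         # lossless round trip
```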
Data Hiding Using Reversibly Designed Difference-Pair Method (IJERA Editor)
This document presents a reversible data hiding technique called the difference-pair method. The technique embeds data into digital images by modifying pixel values in a way that allows perfect recovery of the original image. It aims to increase the embedding capacity compared to previous related work. The proposed method allows modification of either the first or second pixel in a pixel-pair, providing four possible modification directions rather than just two as in prior work. This increased flexibility in pixel modifications can boost data hiding capacity while maintaining reversibility and image quality. The technique is evaluated by comparing results to existing reversible data hiding schemes.
Project Report on Medical Image Compression submitted for the award of B.Tech degree in Electrical and Electronics Engineering by Paras Prateek Bhatnagar, Paramjeet Singh Jamwal, Preeti Kumari and Nisha Rajani during session 2010-11.
Radical Data Compression Algorithm Using Factorization (CSCJournals)
This work deals with an encoding algorithm that converts a message into a "compressed" form with fewer characters, understandable only by decoding the encoded data to reconstruct the original message. The proposed factorization techniques, in conjunction with a lossless method, were adopted for compressing and decompressing data to make better use of memory, thereby decreasing communication costs. The proposed algorithms also shield the data from the eyes of cryptanalysts during storage or transmission.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... (IAEME Publication)
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
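The block-estimation step is an exhaustive sum-of-absolute-differences search; a minimal sketch, where the +/-4 search window and 8x8 block size are assumptions rather than the paper's settings:

```python
import numpy as np

def best_match(block, prev_frame, y, x, search=4):
    """Find the block in prev_frame minimizing the sum of absolute
    differences (SAD) within +/-search pixels of position (y, x)."""
    h, w = block.shape
    best = (None, float('inf'))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev_frame.shape[0] - h and 0 <= xx <= prev_frame.shape[1] - w:
                cand = prev_frame[yy:yy + h, xx:xx + w]
                sad = np.abs(block.astype(int) - cand.astype(int)).sum()
                if sad < best[1]:
                    best = ((dy, dx), sad)
    return best  # motion vector and its SAD score

prev = np.random.randint(0, 256, (32, 32))
curr_block = prev[10:18, 12:20].copy()      # block taken from offset (10, 12)
print(best_match(curr_block, prev, 8, 8))   # recovers offset (2, 4) with SAD 0
```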
With the growth of digital media, modification and transfer of information have become very easy, so this work focuses on transferring data by hiding it in an image. A robust approach is achieved by using the skew tent map as the encryption/decryption algorithm on the sender and receiver sides. As the initial step, the image is transformed into inverse S-order so that some confusion is created for an intruder. The data hiding itself is performed with a modified histogram shifting method. This approach ensures that the hidden information and the image can be recovered with no information loss. An investigation is carried out on a genuine image dataset. Assessment parameter values demonstrate that the proposed work maintains SNR, PSNR, throughput, data hiding execution time, and extraction time while keeping the information highly secure.
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ... (acijjournal)
Recent advances in computing, such as massively parallel GPUs (Graphical Processing Units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community, and other stakeholders. These challenges, such as the prohibitively large cost of manipulating digital data, have been the focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the Discrete Cosine Transform, which separates an image into parts of differing frequencies and has the advantage of excellent energy compaction. This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the Cordic-based Loeffler DCT algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and GPU. PSNR (peak signal to noise ratio) is used to evaluate image reconstruction quality. The results are presented and discussed.
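PSNR is computed from the mean squared error between the original and reconstructed images; a standard NumPy version of the metric:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

orig = np.random.randint(0, 256, (64, 64))
noisy = np.clip(orig + np.random.normal(0, 5, orig.shape), 0, 255)
print(f"{psnr(orig, noisy):.2f} dB")   # roughly 34 dB for sigma = 5 noise
```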
This document describes Dremel, an interactive query system for analyzing large nested datasets. Dremel uses a multi-level execution tree to parallelize queries across thousands of CPUs. It stores nested data in a novel columnar format that improves performance by only reading relevant columns from storage. Dremel has been in production at Google since 2006 and is used by thousands of users to interactively analyze datasets containing trillions of records.
Scanned document compression using block based hybrid video codec (Muthu Samy)
The document discusses file compression using the Huffman coding algorithm. It begins with an abstract describing how file compression reduces file size, allowing faster transmission and less storage usage. It then notes that the main algorithm used is Huffman coding. The rest of the document details the Huffman coding algorithm: it assigns variable-length codes to symbols based on their probability, so more common symbols receive shorter codes, and it produces the most efficient code of this (prefix) type. It also notes some limitations of Huffman coding compared to other methods.
The document discusses different techniques for compressing multimedia data such as text, images, audio and video. It describes how compression works by removing redundancy in digital data and exploiting properties of human perception. It then explains different compression methods including lossless compression, lossy compression, entropy encoding, and specific algorithms like Huffman encoding and arithmetic coding. The goal of compression is to reduce the size of files to reduce storage and bandwidth requirements for transmission.
This document discusses Huffman coding, which is a technique used to compress files for transmission by assigning variable length codes to characters based on their frequency of occurrence. It provides details on how a Huffman tree is constructed by first counting character frequencies, prioritizing characters by frequency, and building the tree by combining the lowest frequency nodes. The tree is then traversed to determine the code words for each character. When encoding a file, the characters are replaced with their code words to create a more compact representation. Decoding works by using the same tree to map the code words back to characters. Huffman coding results in significant data compression by assigning shorter codes to more common characters.
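That construction (count frequencies, keep a priority queue, repeatedly merge the two lowest-frequency nodes, then walk the tree for code words) fits in a short sketch:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code: merge the two lowest-frequency nodes until one
    tree remains, then read code words off the tree (0 = left, 1 = right)."""
    heap = [[freq, i, sym, None, None]        # tie-breaker i keeps nodes comparable
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], n, None, lo, hi])
        n += 1
    codes = {}
    def walk(node, prefix):
        if node[2] is not None:               # leaf: record its code word
            codes[node[2]] = prefix or "0"
        else:
            walk(node[3], prefix + "0")
            walk(node[4], prefix + "1")
    walk(heap[0], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
print(codes, len(encoded), "bits")   # the frequent 'a' gets the shortest code
```

Decoding walks the same tree bit by bit, which is why the code table (or the frequency counts used to rebuild it) must travel with the compressed file.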
Huffman coding is an algorithm that uses variable-length binary codes to compress data. It assigns shorter codes to more frequent symbols and longer codes to less frequent symbols. The algorithm constructs a binary tree from the frequency of symbols and extracts the Huffman codes from the tree. Huffman coding is widely used in applications like ZIP files, JPEG images, and MPEG videos to reduce file sizes for efficient transmission or storage.
This document discusses various topics related to data compression including compression techniques, audio compression, video compression, and standards like MPEG and JPEG. It covers lossless versus lossy compression, explaining that lossy compression can achieve much higher levels of compression but results in some loss of quality, while lossless compression maintains the original quality. The advantages of data compression include reducing file sizes, saving storage space and bandwidth.
Effects of Weight Approximation Methods on Performance of Digital Beamforming... (IOSR Journals)
This document discusses the effects of weight approximation methods on the performance of digital beamforming using the least mean squares (LMS) algorithm. It compares the performance of two proposed weight approximation algorithms - minimum modulus method and motion corroboration method - to conventional 0-floor and 0.5-floor methods. The proposed algorithms provide better beampattern, lower sidelobe levels, and slightly faster convergence compared to conventional methods, though with increased computational cost. It also examines the effect of the LMS convergence coefficient μ on sidelobe levels, finding an optimal μ value that minimizes sidelobes for each approximation method.
Remedy to the Shading Effect on Photovoltaic Cell (IOSR Journals)
This document discusses remedies for the shading effect on photovoltaic cells. It proposes connecting bypass diodes parallel to solar cells such that when shading occurs, the reverse voltage enables the bypass diode to conduct current from the unshaded cells. This results in the current from the unshaded cells flowing through the bypass diode, showing a second local maximum on the power/voltage characteristics. The shaded cell is only loaded with power from the unshaded cells in that section. The document then provides details on sizing the components of a photovoltaic system based on load assessment, including selecting a 1.5 kVA inverter, a 24V 400Ah battery bank, and determining the required solar panel size
Web-Based System for Software Requirements Quality Analysis Using Case-Based ... (IOSR Journals)
This document proposes a web-based system to analyze the quality of software requirements specifications (SRS) using case-based reasoning (CBR) and artificial neural networks (ANN). CBR solves new problems by comparing them to past, stored cases, but this can be inefficient when the case base is large. The proposed system improves the retrieval phase of CBR by using ANN to more efficiently measure the similarity between a new case and existing cases. This results in a web-based system that allows users to input SRS quality attributes and indicators, analyzes the SRS using CBR integrated with ANN, and presents a quality analysis report. The system is intended to help software developers better understand SRS quality and requirements.
A wide range of business managers, HR professionals, and consultants share great ideas, tips and strategies to find, keep and manage employees. Visit: http://hrwisdom.com.au
Focused Exploration of Geospatial Context on Linked Open Data (Thomas Gottron)
Talk at IESD 2014 workshop in Riva del Garda (at ISWC).
Abstract: The Linked Open Data cloud provides a wide range of different types of information which are interlinked and connected. When a user or application is interested in specific types of information under time constraints, it is best to explore this vast knowledge network in a focused and directed way. In this paper we address the novel task of focused exploration of Linked Open Data for geospatial resources, helping journalists in real time during breaking news stories to find contextual geospatial information related to geoparsed content. After formalising the task of focused exploration, we present and evaluate five approaches based on three different paradigms. Our results on a dataset with 425,338 entities show that focused exploration on the Linked Data cloud is feasible and can be implemented at very high levels of accuracy of more than 98%.
Pillows and chairs can be used for both relaxation and exercise or play. Pillows are commonly used for sleeping or leaning but can also be used for pillow fights with friends. Chairs can be sat on but also used for balancing exercises by dancing on them or testing their quality, and chairs and pillows arranged together can make a cute scene.
The NEW New West Economic Forum: Dr. Nigel Murray, Panel Speaker (investnewwest)
Royal Columbian Hospital in New Westminster is undergoing redevelopment and expansion to update its 428-bed facility. The project will help the hospital serve the growing population of British Columbia and provide world-class healthcare. In the interim, the hospital is increasing services like a new multipurpose interventional suite to provide better care. The redevelopment envisions creating an academic campus that enhances the community's quality of life through healthcare and education.
Towards Reliable Systems with User Action Tolerance and Recovery (IOSR Journals)
This document discusses mechanisms to improve operating system reliability by making it tolerant to certain user actions and enabling recovery of resources like files. It presents two kernel module drivers called the stringent and strategic tolerance drivers. The stringent driver protects important files by intercepting deletion actions and blocking the deletion. The strategic driver backs up important files on startup and recovers any missing files from the backup on shutdown. Evaluation shows the drivers can tolerate accidental user deletions with minimal performance overhead and improve system reliability. The drivers were implemented as character drivers on the Linux kernel.
A Hybrid Technique for Shape Matching Based on chain code and DFS Tree (IOSR Journals)
This document proposes a hybrid technique for shape matching that combines chain code and depth-first search (DFS) tree methods. Chain code is used to detect boundaries and represent shapes, but can result in long codes that reduce accuracy. DFS is applied to break long chain codes into smaller subgraphs, producing more compact patterns that improve matching performance. The technique is tested on 500 images from different categories, achieving higher precision and recall than chain code alone, demonstrating the effectiveness of the hybrid approach.
This document describes a study that aimed to develop and characterize an adsorbent from rice husk ash to bleach vegetable oils such as palm, palm kernel, and groundnut oils. Rice husk samples were pretreated with different concentrations of HCl and then calcined at 600°C for 3 hours. The optimum conditions were determined to be pretreatment with 2.5M HCl and calcination at 600°C for 3 hours. Under these conditions, the rice husk ash showed the best bleaching potential for palm kernel and palm oils with 2.5M HCl and for groundnut oil with 2M HCl. Characterization of the rice husk ash samples found that acid pretreatment improved the bleaching
Image Compression Through Combination Advantages From Existing Techniques (CSCJournals)
The tremendous growth of digital data has led to a high need for compression, whether to minimize memory usage or to speed up transmission. Although many techniques already exist, there is still space and need for new techniques in this area of study. With this paper we aim to introduce a new technique for data compression through pixel combinations, usable for both lossless and lossy compression. This new technique can be used as a standalone solution or as an add-on to some other data compression method, providing better results. It is applied here only to images, but it can easily be modified to work on any other type of data. We present a side-by-side comparison, in terms of compression rate, of our technique with other widely used image compression methods. We show that the compression ratio achieved by this technique ranks among the best in the literature while the actual algorithm remains simple and easily extensible. Finally, the case is made for the ability of our method to intrinsically support and enhance methods used for cryptography, steganography and watermarking.
Real time database compression optimization using iterative length compressio... (csandit)
The document summarizes a research paper on optimizing real-time database compression using an Iterative Length Compression (ILC) algorithm. The proposed ILC algorithm aims to compress real-time databases more effectively to reduce storage requirements, costs, and increase backup speed compared to conventional compression methods. Experimental results on a 1GB sample database show the ILC approach achieves a compression ratio of 2.76 and 64% space savings, outperforming other compression algorithms in terms of compression ratio, space savings, and processing time for compression and decompression.
Design of Image Compression Algorithm using MATLAB (IJEEE)
This document summarizes an image compression algorithm designed using Matlab. It discusses various image compression techniques including lossy and lossless compression methods. Lossy compression methods like JPEG are suitable for natural images while lossless methods are preferred for medical or archival images. The document also describes the image compression process involving encoding, quantization, decoding and calculation of compression ratios and quality metrics like PSNR and SNR. Specific compression algorithms discussed include Block Truncation Coding and techniques that exploit coding, inter-pixel and psychovisual redundancies like DCT used in JPEG.
The document discusses various data compression techniques including run-length coding, quantization, statistical coding, dictionary-based coding, transform-based coding, and motion prediction. It provides examples and explanations of how each technique works to reduce the size of encoded data. The performance of compression algorithms can be measured by the compression ratio, compression factor, or percentage of data saved by compression.
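As a worked example of those measures for a file that shrinks from 64 KB to 16 KB (note that naming conventions for ratio versus factor vary between texts):

```python
original_size = 65_536        # bytes before compression
compressed_size = 16_384      # bytes after compression

compression_ratio = original_size / compressed_size               # 4.0, i.e. "4:1"
compression_factor = compressed_size / original_size              # 0.25 of the input kept
savings_percent = 100 * (1 - compressed_size / original_size)     # 75% of data saved
print(compression_ratio, compression_factor, savings_percent)
```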
EVALUATE DATABASE COMPRESSION PERFORMANCE AND PARALLEL BACKUP (ijdms)
The document describes evaluating database compression performance and parallel backup. It proposes efficient algorithms to compress real-time databases more effectively and improve the speed of backup and restore operations. The algorithms compress databases using an Iterative Length Compression technique and store the compressed databases in multiple devices in parallel using a Parallel Multithreaded Pipeline approach. Experimental results show the proposed techniques achieve higher compression ratios and space savings compared to other compression algorithms, and reduce backup time by storing compressed databases in parallel across multiple devices.
Efficient Image Compression Technique using Clustering and Random Permutation (IJERA Editor)
Multimedia data compression is a challenging task, due both to the possibility of data loss and to the large amount of storage required. Minimizing storage space and properly transmitting these data require compression. In this dissertation we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is separated into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, whose process selects the optimal block of the DWT transform function. For the experiments we used standard images such as the Lena, Barbara, and cameraman images, at a resolution of 256*256. The source of the images is Google.
This document provides an overview of image compression techniques. It discusses how image compression works to reduce the number of bits needed to represent image data. The main goals of image compression are to reduce irrelevant and redundant image information to produce smaller and more efficient file sizes for storage and transmission. The document outlines different compression methods including lossless compression, which compresses data without any loss, and lossy compression, which allows for some loss of information in exchange for higher compression ratios. Specific techniques like run length encoding are also explained.
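A round-trip sketch of run length encoding as described, using count-and-symbol pairs (real coders pack the counts into fixed-width fields):

```python
def rle_encode(data):
    """Replace each run of identical symbols with a [count, symbol] pair."""
    runs = []
    for sym in data:
        if runs and runs[-1][1] == sym:
            runs[-1][0] += 1
        else:
            runs.append([1, sym])
    return runs

def rle_decode(runs):
    return [sym for count, sym in runs for _ in range(count)]

scanline = list("WWWWBBBWWWW")        # toy binary-image scanline
runs = rle_encode(scanline)           # [[4, 'W'], [3, 'B'], [4, 'W']]
assert rle_decode(runs) == scanline   # lossless round trip
```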
An image in its original form contains a large amount of redundant data, which consumes a huge amount of memory and can also create storage and transmission problems. The rapid growth in the field of multimedia and digital images likewise demands more storage and more bandwidth for data transmission. By reducing redundant bits within the image data, the size of an image can be reduced without affecting essential data. In this paper we present existing lossless image compression techniques. Image quality is also discussed on the basis of certain performance parameters such as compression ratio, peak signal to noise ratio, and root mean square error.
Comparison Of Various Lossless Image Compression Techniques (IJERA Editor)
Today images are considered the major information banks in the world. They can convey much more information to the receiver than a few pages of written text. For this very reason image processing has become a field of research today. The processing is basically of two types: lossy and lossless. Since information is power, having it complete and intact is of great importance, and in such cases lossless techniques are the best option. This paper deals with the comparison of different lossless image compression techniques available today.
On Text Realization Image Steganography (CSCJournals)
In this paper the steganography strategy is implemented in a different way and from a different scope: the important data will neither be hidden in an image nor transferred through the communication channel inside an image. On the contrary, a well-known image that exists on both sides of the channel is used, and a text message containing the important data is transmitted. With suitable operations, we can re-mix and re-make the source image. The algorithm is implemented in MATLAB 7 and shows high ability to achieve the task for different types and sizes of images. Perfect reconstruction was achieved on the receiving side. Most interesting is that the algorithm deals with secured image transmission while transmitting no images at all.
This document provides an overview of image compression. It discusses what image compression is, why it is needed, common terminology used, entropy, compression system models, and algorithms for image compression including lossless and lossy techniques. Lossless algorithms compress data without any loss of information while lossy algorithms reduce file size by losing some information and quality. Common lossless techniques mentioned are run length encoding and Huffman coding while lossy methods aim to form a close perceptual approximation of the original image.
SiDe is a cost-efficient cloud storage mechanism that uses data deduplication and compression to minimize storage usage while maintaining reliability. It uses chunk-level deduplication to identify and store only unique chunks of files. For files stored short-term, only one replica is kept, while files stored long-term have one replica and one compressed copy. Simulation results show SiDe reduces storage by 81-84% compared to traditional 3-replica strategies, significantly lowering cloud storage costs.
The document discusses structures for data compression. It begins by introducing general compression concepts like lossless versus lossy compression. It then distinguishes between vector and raster data, describing how each is structured and stored. For vectors, it discusses storing points and lines more efficiently. For rasters, it explains how resolution affects file size and covers storage methods like tiles. Overall, the document provides an overview of data compression techniques for different data types.
Data Mining Un-Compressed Images from cloud with Clustering Compression techn...ijaia
This document summarizes a research paper on compressing uncompressed images from the cloud using k-means clustering and Lempel-Ziv-Welch (LZW) compression. It begins by introducing cloud computing and k-means clustering. It then describes using k-means to group uncompressed images and compressing the images using LZW coding to reduce file sizes while maintaining image quality. The document discusses advantages of LZW compression like achieving compression ratios around 5:1. It provides examples of applying k-means clustering and LZW compression to simplify image compression.
This document provides an overview of lossless data compression techniques. It discusses Huffman coding, Shannon-Fano coding, and Run Length Encoding as common lossless compression algorithms. Huffman coding assigns variable length binary codes to symbols based on their frequency, with more common symbols getting shorter codes. Shannon-Fano coding similarly generates a binary tree to assign codes but aims for a roughly equal probability between left and right subtrees. Run Length Encoding replaces repeated sequences with the length of the run and the symbol. The document contrasts lossless techniques that preserve all data with lossy techniques used for media that can tolerate some loss of information.
the compression of images is an important step before we start the processing of larger images or videos. The compression of images is carried out by an encoder and output a compressed form of an image. In the processes of compression, the mathematical transforms play a vital role.
This document discusses digital image processing and image compression. It covers 5 units: digital image fundamentals, image transforms, image enhancement, image filtering and restoration, and image compression. Image compression aims to reduce the size of image data and is important for applications like facsimile transmission and CD-ROM storage. There are two types of compression - lossless, where the original and reconstructed data are identical, and lossy, which allows some loss for higher compression ratios. Factors to consider for compression method selection include whether lossless or lossy is needed, coding efficiency, complexity tradeoffs, and the application.
The main aim of image compression is to represent the image with minimum number of bits and thus reduce the size of the image. This paper presents a Symbols Frequency based Image Coding (SFIC) technique for image compression. This method utilizes the frequency of occurrence of pixels in an image. A frequency factor, y is used to merge y pixel values that are in the same range. In this approach, the pixel values of the image that are within the frequency factor, y range are clubbed to the least pixel value in the set. As a result, there is omission of larger pixel values and hence the total size of the image reduces and thus results in higher compression ratio. It is noticed that the selection of the frequency factor, y has a great influence on the performance of the proposed scheme. However, higher PSNR values are obtained since the omitted pixels are mapped to pixels in the similar range. The proposed approach is analyzed with quantization and without quantization. The results are analyzed. This proposed new compression model is compared with Quadtree-segmented AMBTC with Bit Map Omission. From the experimental analysis it is observed that the proposed SFIC image compression scheme with both lossless and lossy techniques outperforms AMBTC-QTBO. Hence, the proposed new compression model is a better choice for lossless and lossy compression applications.
Similar to Affable Compression through Lossless Column-Oriented Huffman Coding Technique (20)
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
Affable Compression through Lossless Column-Oriented Huffman Coding Technique

IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 6 (May-Jun. 2013), PP 89-96
www.iosrjournals.org

Punam Bajaj, Simranjit Kaur Dhindsa
Computer Science Engineering Department, Chandigarh Engineering College, Landran, Mohali, Punjab
Abstract: Compression is a technique used by many DBMSs to increase performance. Compression improves
performance by reducing the size of data on disk, decreasing seek times, increasing the data transfer rate and
increasing buffer pool hit rate [1]. Column-Oriented Data works more naturally with compression because
compression schemes capture the correlation between values; therefore highly correlated data can be
compressed more efficiently than uncorrelated data. The correlation between values of the same attribute is
typically greater than the correlation between values of different attributes. Since a column is a sequence of
values from a single attribute, it is usually more compressible than a row [4].
In this paper we propose a lossless method of Column-Oriented Data-Image Compression and
Decompression using a simple coding technique called Huffman coding. This technique is simple to
implement and utilizes less memory [2]. A software algorithm has been developed and implemented to
compress and decompress the created Column-oriented database image using Huffman coding techniques on a
MATLAB platform.
Keywords- Compression, Column-Oriented Data-Image Compression and Decompression, Huffman coding.
I. Introduction:
Column-oriented DBMSs are currently under development. Column-oriented DBMSs differ from
row-oriented DBMSs in the layout of data on disk [4]. In a column store, the values of each attribute
(column) are stored contiguously on disk; in a row store, the values of the attributes of each tuple are stored
contiguously.
contiguously. Compression is a technique used by many DBMSs to increase performance. Compression
improves performance by reducing the size of data on disk, decreasing seek times, increasing the data transfer
rate and increasing buffer pool hit rate [1]. Intuitively, data stored in columns is more compressible than data
stored in rows. Column-oriented Compression algorithms perform better on data with low information entropy
(high data value locality) [3]. For example, imagine a database table containing information about customers (name,
phone number, e-mail address, etc.). Storing data in columns allows all of the names to be stored
together, all of the phone numbers together, and so on. Certainly phone numbers will be more similar to each other
than surrounding text fields like e-mail addresses or names [4]. Further, if the data is sorted by one of the
columns, that column will be super-compressible. Column data is of uniform type; therefore, there are some
opportunities for storage size optimizations available in column-oriented data that are not available in row-
oriented data. This has advantages for data warehouses and library catalogues where aggregates are computed
over large numbers of similar data items [5].
Therefore, Column-Oriented Compression is better suited than traditional Row-oriented Compression:
as applications require more storage and easier availability of data, these demands are satisfied by better and
faster techniques [7].
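To make this intuition concrete, the short MATLAB sketch below (ours, not part of the original paper; the attribute values are invented) run-length encodes a sorted column. Because a sorted column has long runs of equal values, a handful of (value, count) pairs can replace thousands of raw entries.

ages = randi([18 65], 1000, 1);        % one attribute (column) of a table
sorted_col = sort(ages);               % sorting by the column boosts value locality

% Run-length encode: store each distinct value once, with its repeat count.
change = [true; diff(sorted_col) ~= 0];            % marks the start of each run
values = sorted_col(change);                       % one entry per run
counts = diff([find(change); numel(sorted_col) + 1]);  % length of each run

fprintf('Raw entries: %d, RLE pairs: %d\n', numel(sorted_col), numel(values));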
II. Column-Oriented Compression
Compression is possible for data that are redundant or repeated in a given test set. Compression is a
technique used by many DBMSs to increase performance. Compression improves performance by reducing the
size of data on disk, decreasing seek times, increasing the data transfer rate and increasing buffer pool hit rate
[1]. Intuitively, data stored in columns is more compressible than data stored in rows.
Compression is usually of three types:
• Data Compression
• Image Compression
• Graphical Compression
In this paper, however, we perform Data Compression by embedding the data into images, i.e. by using
Column-Oriented Image Compression.
Column data is of uniform type; therefore, there are some opportunities for storage size optimizations available
in Column-oriented data that are not available in Row-oriented data. Compression is useful because it helps
reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth.
Infobright is an example of an open-source Column-Oriented DBMS built for high-speed reporting and
analytical queries, especially against large volumes of data. Data that required 450 GB of storage using SQL
Server required only 10 GB with Infobright, due to Infobright's massive compression and the elimination of all
indexes. Using Infobright, the overall compression ratio seen in the field is 10:1; some customers have seen results
of 40:1 and higher. E.g., 1 TB of raw data compressed 10 to 1 would only require 100 GB of disk capacity [5].

Customer's Test                   Alternative                  Infobright
Analytic Queries                  2+ hours with MySQL          <10 seconds
1 Month Report (15MM Events)      43 min with SQL Server       23 seconds
Oracle Query Set                  10 seconds - 15 minutes      0.43-22 seconds

Table 1: Performance Output Difference
Therefore, we can conclude that Column-Oriented Data Compression performs better than traditional Row-
oriented Compression: as applications require more storage and easier availability of data, these demands are
satisfied by better and faster techniques [7].
III. Image Compression
A digital image obtained by sampling and quantizing a continuous-tone picture requires enormous
storage. For instance, a 24-bit color image with 512x512 pixels occupies 768 KB on disk, and a
picture twice this size will not fit on a single floppy disk. Transmitting such an image over a 28.8 Kbps modem
would take almost 4 minutes. The purpose of image compression is to reduce the amount of data required for
representing sampled digital images and therefore reduce the cost of storage and transmission. Image
compression plays a key role in many important applications, including image databases, image communications,
and remote sensing.
The images to be compressed here are gray scale, with pixel values between 0 and 255. There are different
techniques for compressing images [6]. They are broadly classified into two classes, called lossless and lossy
compression techniques. As the name suggests, in lossless compression techniques no information regarding the
image is lost. In other words, the reconstructed image from the compressed image is identical to the original
image in every sense. In lossy compression, by contrast, some image information is lost, i.e. the reconstructed
image from the compressed image is similar to the original image but not identical to it. In this work we use
lossless compression and decompression through a technique called Huffman coding (i.e. Huffman encoding
and decoding) [6].
It is well known that Huffman's algorithm generates minimum-redundancy codes compared to
other algorithms. Huffman coding has been used effectively in text, image, and video compression, and in conferencing
systems such as JPEG, MPEG-2, MPEG-4, and H.263. The Huffman coding technique collects the unique
symbols from the source image, calculates the probability value of each symbol, and sorts the symbols based
on their probability values. Then, from the lowest-probability symbol to the highest-probability
symbol, two symbols are combined at a time to form a binary tree. Zero is allocated to the left node and one
to the right node, starting from the root of the tree. To obtain the Huffman code for a particular symbol, all the zeros and
ones from the root to that particular node are collected in the same order [8].
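As an illustration of the procedure just described, the following MATLAB sketch (ours) builds a Huffman code by repeatedly merging the two least probable nodes and prepending a 0 or 1 bit to every symbol in each merged node. The six-symbol source is a toy example; it matches the worked example used later in Section VI.

p = [0.4 0.3 0.1 0.1 0.06 0.04];          % symbol probabilities (toy source)
codes = repmat({''}, 1, numel(p));         % codeword accumulator, one per symbol
groups = num2cell(1:numel(p));             % each node starts as a single symbol

while numel(groups) > 1
    [p, order] = sort(p, 'descend');       % keep nodes sorted by probability
    groups = groups(order);
    % Merge the two least probable nodes; prepend 0/1 to their symbols' codes.
    for s = groups{end-1}, codes{s} = ['0' codes{s}]; end   % left (0) branch
    for s = groups{end},   codes{s} = ['1' codes{s}]; end   % right (1) branch
    p = [p(1:end-2), p(end-1) + p(end)];                    % merged probability
    groups = [groups(1:end-2), {[groups{end-1}, groups{end}]}];
end
disp(codes)   % codeword lengths come out as 1, 2, 3, 4, 5, 5 for this source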
IV. Need For Compression
Research indicates that the size of the largest data warehouses doubles every three years. According to
Wintercorp's 2005 TopTen Program Summary, during the five-year period between 1998 and 2003 the size of
the largest data warehouse grew at an exponential rate, from 5 TB to 30 TB. But in the four-year period between
2001 and 2005 that exponential rate increased, with the largest data warehouse growing from 10 TB to 100 TB
[9].
To store these data, including images, audio files, videos etc., and make them available over a network
(e.g. the internet), compression techniques are needed. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. The underlying basis of the reduction process is the
removal of redundant data. From a mathematical point of view, this amounts to transforming a two-
dimensional pixel array into a statistically uncorrelated data set. The transformation is applied prior to storage or
transmission of the image. At the receiver, the compressed image is decompressed to reconstruct the original image
or an approximation to it. The example below clearly shows the importance of compression. An image of 1024
pixel x 1024 pixel x 24 bit, without compression, would require 3 MB of storage and about 7 minutes for transmission,
utilizing a high-speed 64 Kbit/s ISDN line. If the image is compressed at a 10:1 compression ratio, the storage
requirement is reduced to 300 KB and the transmission time drops to about 40 seconds.
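The arithmetic behind this example can be checked directly; the short sketch below (ours) reproduces the figures from the stated assumptions.

bits_raw = 1024 * 1024 * 24;       % uncompressed image size in bits
mb_raw   = bits_raw / 8 / 2^20     % = 3 MB of storage
secs_raw = bits_raw / 64e3         % ~393 s, i.e. about 7 minutes at 64 kbit/s
secs_cmp = secs_raw / 10           % ~39 s after 10:1 compression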
4.1 Principle behind Compression
A common characteristic of most images is that the neighboring pixels are correlated and therefore
contain redundant information. The foremost task then is to find a less correlated representation of the image.
Two fundamental components of compression are redundancy reduction and irrelevancy reduction.
a) Redundancy reduction aims at removing duplication from the signal source (image/video).
b) Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the
Human Visual System.
In an image, three types of redundancy can be exploited to reduce file size. They are:
a) Coding redundancy: Fewer bits to represent frequently occurring symbols.
b) Inter-pixel redundancy: Neighboring pixels have almost same value.
c) Psycho visual redundancy: Human visual system cannot simultaneously distinguish all colors.
V. Various Types Of Redundancy
In digital image compression, three basic data redundancies can be identified and exploited:
a. Coding redundancy
b. Inter pixel redundancy
c. Psycho visual redundancy
Data compression is achieved when one or more of these redundancies are reduced or eliminated.
5.1 Coding Redundancy
A gray-level image having n pixels is considered. Let us assume that a discrete random variable rk in
the interval (0,1) represents the grey levels of the image and that each rk occurs with probability Pr(rk).
The probability can be estimated from the histogram of the image using
Pr(rk) = hk / n for k = 0, 1, ..., L-1
where L is the number of grey levels, hk is the frequency of occurrence of grey level k (the number of times
the kth grey level appears in the image) and n is the total number of pixels in the image. If the number of
bits used to represent each value of rk is l(rk), the average number of bits required to represent each pixel is:
Lavg = sum over k of l(rk) Pr(rk), for k = 0, 1, ..., L-1.
Hence the number of bits required to represent the whole image is n x Lavg. Maximal compression
is achieved when Lavg is minimized. Coding the gray levels in such a way that Lavg is not minimized results
in an image containing coding redundancy. Generally, coding redundancy is present when the codes (whose
lengths are represented here by the function l(rk)) assigned to the gray levels do not take full advantage of the gray levels'
probabilities (the function Pr(rk)). It is therefore almost always present when an image's gray levels are represented
with a straight or natural binary code: a natural binary coding of the gray levels assigns the same number of
bits to both the most and least probable values, thus failing to minimize Lavg and resulting in coding
redundancy.
Example of Coding Redundancy: An 8-level image has the gray-level distribution shown in Table I. If a natural
3-bit binary code is used to represent the 8 possible gray levels, Lavg is 3 bits, because l(rk) = 3 bits for all rk. If code
2 in Table I is used, however, the average number of bits required to code the image is reduced to:
Lavg = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 bits.
From the equation for the compression ratio (CR = n1/n2), the resulting compression ratio CR is 3/2.7 = 1.11. Thus
approximately 10% of the data resulting from the use of code 1 is redundant. The exact level of redundancy can
be determined from the equation RD = 1 - 1/1.11 = 0.099.
Table I: Example of Variable Length Coding
It is clear that 9.9% of the data in the first code is redundant, and it is this redundancy that is removed to achieve compression.
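The numbers in this example can be reproduced with a few lines of MATLAB (the probabilities and code lengths are taken from the text; the snippet itself is ours):

p  = [0.19 0.25 0.21 0.16 0.08 0.06 0.03 0.02];   % gray-level probabilities
l2 = [2 2 2 3 4 5 6 6];                           % code-2 word lengths
Lavg = sum(p .* l2)                               % 2.7 bits/pixel
CR = 3 / Lavg                                     % compression ratio, ~1.11
RD = 1 - 1/CR                                     % relative redundancy, ~0.099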
5.1.1 Reduction of Coding Redundancy
To reduce this redundancy in an image we use the Huffman technique: assigning fewer bits to the more probable
gray levels than to the less probable ones achieves data compression. This process is commonly referred to as
variable-length coding. There are several optimal and near-optimal techniques for constructing such a code,
e.g. Huffman coding and arithmetic coding.
5.2 Inter-pixel Redundancy
Another important form of data redundancy is inter-pixel redundancy, which is directly related to the
inter-pixel correlations within an image. Because the value of any given pixel can be reasonably predicted from
the values of its neighbors, the information carried by individual pixels is relatively small. Much of the visual
contribution of a single pixel to an image is redundant; it could have been guessed on the basis of its neighbors'
values. A variety of names, including spatial redundancy, geometric redundancy, and inter-frame redundancy,
have been coined to refer to these inter-pixel dependencies. In order to reduce the inter-pixel redundancies in an
image, the 2-D pixel array normally used for human viewing and interpretation must be transformed into a more
efficient but usually non-visual format. For example, the differences between adjacent pixels can be used to
represent an image. Transformations of this type are referred to as mappings. They are called reversible if the
original image elements can be reconstructed from the transformed data set.
5.2.1 Reduction of Inter-pixel Redundancy
To reduce the inter-pixel redundancy we use various techniques, such as:
1. Run-length coding.
2. Delta compression (see the sketch below).
3. Predictive coding.
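A minimal MATLAB sketch of delta compression (item 2 above), with made-up pixel values, shows that the mapping is reversible: the original row is recovered exactly from the first value and the differences.

row = uint8([100 101 101 102 150 150 149]);      % one image row (hypothetical values)
d = diff(double(row));                           % differences between neighbours
recovered = uint8(cumsum([double(row(1)), d]));  % rebuild from first value + deltas
isequal(recovered, row)                          % true: the mapping is lossless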
5.3 Psycho visual Redundancy
Human perception of the information in an image normally does not involve quantitative analysis of
every pixel or luminance value in the image. In general, an observer searches for distinguishing features such as
edges or textural regions and mentally combines them into recognizable groupings. The brain then correlates
these groupings with prior knowledge in order to complete the image interpretation process. Thus the eye does not
respond with equal sensitivity to all visual information: certain information simply has less relative importance
than other information in normal visual processing. Such information is said to be psycho-visually redundant. It
can be eliminated without significantly impairing the quality of image perception. Psycho-visual redundancy is
fundamentally different from coding redundancy and inter-pixel redundancy. Unlike coding redundancy
and inter-pixel redundancy, psycho-visual redundancy is associated with real or quantifiable visual information.
Its elimination is possible only because the information itself is not essential for normal visual processing. Since
the elimination of psycho-visually redundant data results in a loss of quantitative information, it is an
irreversible process.
5.3.1 Reduction of Psycho visual Redundancy
To reduce psycho visual redundancy we use a quantizer. Since the elimination of psycho-visually
redundant data results in a loss of quantitative information, the process, commonly referred to as quantization, is an
irreversible operation and results in lossy data compression.
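A small sketch of what a quantizer does (the gray-level values are hypothetical): discarding the low-order bits of each pixel is simple, but the discarded bits cannot be recovered afterwards, which is exactly why the step is lossy.

img = uint8([17 35 200 251]);        % toy gray levels
q = bitshift(bitshift(img, -4), 4)   % keep only the 4 most significant bits
% q is [16 32 192 240]; the low-order bits are gone for good (lossy).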
VI. Implementation Of Lossless Compression And Decompression Techniques
6.1 Huffman coding
The Huffman code procedure is based on two observations:
a. More frequently occurring symbols will have shorter code words than symbols that occur less frequently.
b. The two symbols that occur least frequently will have code words of the same length.
The Huffman code is designed by merging the lowest-probability symbols, and this process is repeated until only
the probabilities of two compound symbols are left; a code tree is thus generated, and the Huffman codes are obtained
by labeling the code tree. This is illustrated with the example shown in Table II:
Table III: Huffman Code Assignment Procedure
At the far left of Table II the symbols are listed and the corresponding symbol probabilities are arranged in
decreasing order. The two least probabilities, here 0.06 and 0.04, are merged; this gives a
compound symbol with probability 0.1, and the compound symbol probability is placed in source-reduction
column 1 so that the probabilities are again in decreasing order. This process is continued until only
two probabilities are left at the far right, shown in the table as 0.6 and 0.4. The second step in Huffman's
procedure is to code each reduced source, starting with the smallest source and working back to the original
source [3]. The minimal-length binary code for a two-symbol source is, of course, the symbols 0 and 1. As
shown in Table III these symbols are assigned to the two symbols on the right (the assignment is arbitrary;
reversing the order of the 0 and 1 would work just as well). As the reduced-source symbol with probability 0.6
was generated by combining two symbols in the reduced source to its left, the 0 used to code it is now assigned
to both of these symbols, and a 0 and 1 are arbitrarily appended to each to distinguish them from each other. This
operation is then repeated for each reduced source until the original source is reached. The final code appears at
the far left in Table III. The average length of the code is the average of the products of each symbol's probability
and the number of bits used to encode it. This is calculated below:
Lavg = (0.4)(1) + (0.3)(2) + (0.1)(3) + (0.1)(4) + (0.06)(5) + (0.04)(5) = 2.2 bits/symbol, and the entropy of the
source is 2.14 bits/symbol, so the resulting Huffman code efficiency is 2.14/2.2 = 0.973.
Huffman's procedure creates the optimal code for a set of symbols and probabilities subject to the constraint that
the symbols be coded one at a time.
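The averages quoted above can be verified numerically; the sketch below (ours) uses the probabilities and code lengths from the worked example.

p   = [0.4 0.3 0.1 0.1 0.06 0.04];   % symbol probabilities from Table II
len = [1 2 3 4 5 5];                 % Huffman codeword lengths from Table III
Lavg = sum(p .* len)                 % 2.2 bits/symbol
H = -sum(p .* log2(p))               % source entropy, ~2.14 bits/symbol
efficiency = H / Lavg                % ~0.97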
6.2 Huffman decoding
After the code has been created, coding and/or decoding is accomplished in a simple look-up-table
manner. The code itself is an instantaneous uniquely decodable block code. It is called a block code because
each source symbol is mapped into a fixed sequence of code symbols. It is instantaneous because each
code word in a string of code symbols can be decoded without referencing succeeding symbols. It is uniquely
decodable because any string of code symbols can be decoded in only one way. Thus, any string of Huffman-
encoded symbols can be decoded by examining the individual symbols of the string in a left-to-right manner. For
the binary code of Table III, a left-to-right scan of the encoded string 010100111100 reveals that the first valid
code word is 01010, which is the code for symbol a3. The next valid code word is 011, which corresponds to
symbol a1. The valid code for symbol a2 is 1 and the valid code for symbol a6 is 00; continuing in this manner
reveals the completely decoded message a3 a1 a2 a2 a6. In this manner the original image or data can be
decompressed using Huffman decoding, as explained above.
The decompressor starts from the same probability distribution as the compressor. The compressor built a
code table, but the decompressor does not use this method; instead it keeps the whole Huffman binary tree,
together with a pointer to the root from which the recursive decoding process starts. In our implementation we
build the tree as usual and then store a pointer to the last node in the list, which is the root. The decoding process
then navigates the tree by following the pointers to the children that each node has. This is done by a recursive
function which accepts as a parameter a pointer to the current node, and returns the symbol.
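The left-to-right scan described above can be sketched in MATLAB as follows. The codewords for a2, a6, a1 and a3 are the ones given in the text; those for a4 and a5 are inferred from the code structure and should be treated as illustrative.

cw = containers.Map( ...
    {'1', '00', '011', '0100', '01010', '01011'}, ...
    {'a2', 'a6', 'a1', 'a4', 'a3', 'a5'});      % codeword -> symbol table
bits = '010100111100';                          % encoded string from the text
buf = ''; out = {};
for b = bits
    buf = [buf b];                 % accumulate bits until a codeword completes
    if isKey(cw, buf)              % instantaneous code: no lookahead needed
        out{end+1} = cw(buf);      % emit the decoded symbol
        buf = '';
    end
end
strjoin(out)                       % 'a3 a1 a2 a2 a6'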
VII. Quality Measures:
7.1 Peak Signal To Noise Ratio:
The Peak Signal to Noise Ratio (PSNR) is the ratio between maximum possible power and corrupting
noise that affect representation of image. PSNR is usually expressed as decibel scale. The PSNR is commonly
used as measure of quality reconstruction of image. The signal in this case is original data and the noise is the
error introduced. High value of PSNR indicates the high quality of image.
It is defined via the Mean Square Error (MSE) and corresponding distortion matric, the Peak Signal to
Noise[10].
7.2 Mean Square Error
The Mean Square Error can be estimated in one of many ways to quantify the difference between the values
implied by an estimate and the true quantity being estimated. MSE is a risk function corresponding to the
expected value of the squared error. For two M x N images f (original) and g (reconstruction),
MSE = (1/(M*N)) * sum over i,j of (f(i,j) - g(i,j))^2. The MSE is the second moment of the error and thus
incorporates both the variance of the estimate and its bias [10].
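Both measures are straightforward to compute; a sketch for 8-bit images with toy data follows (the variable names are ours).

f = randi([0 255], 64);                         % stand-in original image
g = min(max(f + round(2*randn(64)), 0), 255);   % reconstruction with small error
mse = mean((f(:) - g(:)).^2)                    % Mean Square Error
psnr_db = 10 * log10(255^2 / mse)               % PSNR in decibels (higher = better)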
VIII. Development Steps of Column-Oriented Huffman Coding and Decoding Algorithm
Step 1- Plot the columns of interest from the column-oriented database in the workspace of MATLAB.
Step 2- Convert the given figure into a grey-level image.
Step 3- Read the image into the workspace of MATLAB.
Step 4- Call the Column-Oriented Huffman Coding Algorithm.
Step 5- The following five figures are generated as results.
Figure 1: Construction of Image from Column-Oriented Database.
Figure 2: Image Encoding Steps from 1-6
Figure 3: Final Image Encoding Steps.
Figure 4: Image Decoding Steps from 1-6
Figure 5: Final Image Decoding Steps.
Step 6- Calculate values of MSE, PSNR and Elapsed Time.
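A minimal end-to-end sketch of these steps is given below. This is one possible realization under stated assumptions, not the authors' exact code: the file name is hypothetical, huffmandict/huffmanenco/huffmandeco require MATLAB's Communications Toolbox, and rgb2gray requires the Image Processing Toolbox.

img = imread('column_plot.png');              % Step 3: read the plotted image (hypothetical file)
if size(img, 3) == 3, gray = rgb2gray(img); else, gray = img; end   % Step 2: grey level
tic;
syms = double(unique(gray(:)));               % source symbols present in the image
prob = histc(double(gray(:)), syms) / numel(gray);   % symbol probabilities
dict = huffmandict(syms, prob);               % Step 4: build the Huffman code
enc  = huffmanenco(double(gray(:)), dict);    % encode the pixel stream
dec  = huffmandeco(enc, dict);                % decode it back
rec  = reshape(uint8(dec), size(gray));       % reconstructed image
elapsed = toc;                                % Step 6: elapsed time
mse = mean((double(gray(:)) - double(rec(:))).^2);   % 0 for lossless coding
psnr_db = 10 * log10(255^2 / (mse + eps));    % eps guards against division by zero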
IX. Results:
Figure 1: Construction of Image from Column-Oriented Database.
Figure 2: Image Encoding Steps from 1-6.
Figure 3: Final Image Encoding Steps.
Figure 4: Image Decoding Steps from 1-6.
Figure 5: Final Image Decoding Steps
The Huffman coding algorithm described above is applied to the input image shown in Figure 1 to generate the
codes, and the decompression algorithm (i.e. Huffman decoding) is then applied to get the original image back from
the generated codes, as shown in Figure 3. The number of saved bits is the difference between the
number of bits required to represent the input image (shown in Table II), considering that each symbol can
take a maximum code length of 8 bits, and the number of bits taken by the Huffman code to represent the
compressed image, i.e. Saved bits = (8*(r*c) - (l1*l2)) = 3212, where r and c represent the size of the input matrix and
l1 and l2 represent the size of the Huffman code. The compression ratio is the ratio of the number of bits required to
represent the image using the Huffman code to the number of bits used to represent the input image, i.e.
Compression ratio = (l1*l2) / (8*r*c) = 0.8456. The output image is the decompressed image; from Figure 5 it is clear
that the decompressed image is approximately equal to the input image.
X. Conclusion
The experiment shows that higher data redundancy helps to achieve more compression. The above
presented a new Column-Oriented compression and decompression technique based on Huffman coding and
decoding, for scan testing to reduce test data volume and test application time.
Assessment of image quality is a traditional need, and the conventional method for measuring the quality of an image is
via MSE and PSNR. In this paper we evaluated the technique by using these quality
parameters (MSE and PSNR). Experimental results show that:
• PSNR and MSE are inversely related to each other.
• The higher the PSNR, the better the image compression.
MSE = 2.2710e+004, PSNR = 4.5687 dB and Total Elapsed Time = 133.3965.
Therefore, a better compression ratio is obtained for the above image. Hence we conclude that Column-Oriented
Huffman coding is an efficient technique for image compression and decompression. As future work, the
compression of images for storage and transmission can be carried out by other lossless methods of image
compression; as concluded above, the decompressed image is almost the same as the input image, which indicates
that there is no loss of information during transmission. Other methods of image compression, namely the JPEG
method, entropy coding, etc., can therefore be explored.