Image compression is the application of data compression to digital images. The objective is to reduce redundancy in the image data so that the data can be stored or transmitted efficiently.
INTEGRATED PROJECT REPORT
On
IMAGE COMPRESSION AND DECOMPRESSION
SYSTEM
Submitted in partial fulfillment of the requirement for the Course
Integrated Project-II (CSP2204) of
COMPUTER SCIENCE AND ENGINEERING
Batch-2014
in
May-2016
Under the Guidance of: Ms. Ankita Tuteja
Submitted By:
Vishesh (1411981272)
Shubham Kansal (1411981225)
Vipin Kumar (1411981265)
Vikrant Chaudhary (1411981261)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
CHITKARA UNIVERSITY
HIMACHAL PRADESH
CERTIFICATE
This is to certify that the project entitled “Image Compression and Decompression
System”, submitted for the Bachelor of Computer Science Engineering at Chitkara
University, Himachal Pradesh during the academic semester January 2016 - May 2016, is a
bonafide piece of project work carried out by Vishesh (1411981272), Shubham Kansal
(1411981225), Vipin Kumar (1411981265) and Vikrant Chaudhary (1411981261) towards the
partial fulfillment of the course Integrated Project (CSP-2204) under the guidance and
supervision of Ms. Ankita Tuteja.
Signature of Project Guide
Ms. Ankita Tuteja
(Assistant Professor, CSE Department)
CANDIDATE’S DECLARATION
We, Vishesh (1411981272), Shubham Kansal (1411981225), Vipin Kumar (1411981265) and
Vikrant Chaudhary (1411981261), B.E.-2014 of the Chitkara University, Himachal Pradesh
hereby declare that the Integrated Project Report entitled “Image Compression and
Decompression System” is an original work and data provided in the study is authentic to the
best of our knowledge. This report has not been submitted to any other Institute for the award
of any other course.
Sign. of Student 1 Sign. of Student 2 Sign. of Student 3 Sign. of Student 4
Vishesh Shubham Kansal Vipin Kumar Vikrant Chaudhary
1411981272 1411981225 1411981265 1411981261
Place:
Date:
ABSTRACT
The compression and decompression of continuous-tone images is important in document
management and transmission systems. Image compression is the application of data
compression to digital images; the objective is to reduce redundancy in the image data so
that the data can be stored or transmitted in an efficient form. The theory of data
compression therefore becomes more and more significant for reducing data redundancy to
save hardware space and transmission bandwidth. In computer science and information
theory, data compression or source coding is the process of encoding information using fewer
bits or other information-bearing units than an unencoded representation. Compression is
useful because it reduces the consumption of expensive resources such as hard disk space
and transmission bandwidth, while decompression is useful when image quality must be
taken into consideration. In this project, we briefly introduce both image compression and
image decompression.
ACKNOWLEDGEMENT
It is our pleasure to be indebted to the various people who directly or indirectly contributed to
the development of this work and who influenced our thinking, behavior and actions during the
course of study.
We express our sincere gratitude to all for providing an opportunity to undergo Integrated
Project-II as part of the curriculum.
We are thankful to Ms. Ankita Tuteja for her support, cooperation and motivation during the
training, and for her constant inspiration, presence and blessings.
Lastly, we would like to thank the Almighty, our parents for their moral support, and our
friends with whom we shared our day-to-day experiences and received many suggestions that
improved the quality of our work.
With Sincere Thanks,
Vishesh (1411981272), Shubham (1411981225), Vipin (1411981265), Vikrant (1411981261)
TABLE OF CONTENTS
Page No.
Abstract iv
Acknowledgement v
List of Tables viii
List of Figures ix
Chapter 1: Introduction 1
1.1 Concept 2
1.2 Need of Compression 2
1.3 Image Compression 3
1.3.1 Advantages of Image Compression
1.3.2 Disadvantages of Image Compression
1.4 Image Decompression 3
1.4.1 Advantages of Image Decompression
1.4.2 Disadvantages of Image Decompression
1.5 Techniques used for Image Compression and De-Compression 4
Chapter 2: Literature Survey 8
Chapter 3: Methodology 10
3.1 Data Flow Diagram 12
3.2 Class Hierarchy 13
3.3 Software and Hardware Requirements 13
3.4 Layout of the project 14
3.4.1 Netbeans Part
3.4.2 Database Connectivity
Chapter 4: Modules 16
4.1 Compression Module 17
4.2 Decompression Module 21
Chapter 5: Results and Snapshots 22
5.1 Classes 23
5.2 Functions 28
Chapter 6: Conclusion and Future Scope 31
6.1 Conclusion 32
6.2 Future Scope 32
References 33
INTRODUCTION
1.1 Concept :
In recent years, the development of and demand for multimedia products has grown increasingly
fast, contributing to insufficient network bandwidth and memory storage. Therefore,
the theory of data compression becomes more and more significant for reducing data
redundancy to save hardware space and transmission bandwidth. In computer science
and information theory, data compression or source coding is the process of encoding
information using fewer bits or other information-bearing units than an unencoded
representation. Compression is useful because it reduces the consumption of expensive
resources such as hard disk space and transmission bandwidth, while decompression is useful
when image quality must be taken into consideration. In this project, we briefly
introduce both image compression and image decompression.
1.2 Need of Compression :
Compression means converting data that occupies more memory into a form that occupies less.
It is useful to revisit the reasons why data compression is needed.
One obvious reason is to save the cost of disk. This may seem strange given that disks are
cheap, but the disks used for high-end systems are not cheap, and there is rarely a single copy
of the production data.
A second reason is the cost of managing the data: the larger the database, the longer backup
and recovery take.
A third reason is memory. If the data is compressed, you can fit more data in the same memory.
If you could compress the data by 50%, you have effectively doubled your memory, since you
can fit twice as much data in it. Even on a 64-bit machine with a huge amount of addressable
memory, the databases of most customers are many orders of magnitude larger than the
memory, so compression benefits servers running on 64-bit architectures as well. IO-bound
workloads are also likely to see an increase in throughput with data compression, as there is
less data to be read.
1.3 Image Compression :
Image compression is an application of data compression that encodes the original image with
fewer bits. The objective of image compression is to reduce the redundancy of the image and to
store or transmit data in an efficient form. Image compression minimizes the size in bytes
of a graphics file without degrading the quality of the image to an unacceptable level. It
involves minimizing the number of information-carrying units (pixels); for example, an
image where adjacent pixels have almost the same values exhibits spatial redundancy that can
be removed. The reduction in file size allows more images to be stored in a given amount of
disk or memory space. It also reduces the time required for images to be sent over the Internet
or downloaded from Web pages.
1.3.1 Advantages of Image Compression :
Less disk space (more data in the same space).
The quantity of bits used to store the data is reduced.
Faster insertion and deletion.
Faster file transfer.
Byte order independent.
Can zip up several small files into a single file.
1.3.2 Disadvantages of Image Compression :
Added complication.
Effect of errors in transmission.
Slower for sophisticated methods.
Unknown byte/pixel relationship.
Need to decompress all previous data.
1.4 Image De-Compression :
The major goal of image decompression is to decode and reconstruct the original image. It is an
application to get a much better image in terms of quality or size; we can also recover the
original image from its compressed form wherever high image quality is required, even if the
file is of larger size.
1.4.1 Advantages of Image De-Compression :
Quality of the picture is maintained.
More advantageous where HD pictures are required.
No need to decompress further.
Fine byte/pixel relationship.
1.4.2 Disadvantages of Image De-Compression :
Requires more disk space.
The quantity of bits used to store the data is increased.
Slower insertion and deletion.
Slower file transfer.
1.5 Techniques used for Image Compression and De-Compression :
There are several different ways in which image files can be compressed. For Internet use, the
two most common compressed graphic image formats are the JPEG format and the GIF
format. The JPEG method is more often used for photographs, while the GIF method is
commonly used for line art and other images in which geometric shapes are relatively simple.
Other techniques for image compression include the use of fractals and wavelets. These
methods have not gained widespread acceptance for use on the Internet as of this writing.
However, both methods offer promise because they offer higher compression ratios than the
JPEG or GIF methods for some types of images. Another new method that may in time
replace the GIF format is the PNG format.
It is very important to understand that there are two basic types of compression: lossless and
lossy. As the names imply, with lossless compression we will be able to reduce the number of
bytes required to represent the image without losing any information or contents; and with
lossy compression some information will be lost (discarded) in the compression.
A text file or program can be compressed without introducing errors, but only up to a
certain extent; this is called lossless compression. Beyond that point, further compression would
introduce errors. In text and program files, it is crucial that compression be lossless, because a
single error can seriously damage the meaning of a text file or cause a program not to run. In
image compression, a small loss in quality is usually not noticeable. There is no "critical point"
up to which compression works perfectly but beyond which it becomes impossible. When there
is some tolerance for loss, the compression factor can be greater than when there is no loss
tolerance. For this reason, graphic images can be compressed more than text files or programs.
One may ask why we use lossy compression if computers are fast and can deal with
complex algorithms for perfect compression and decompression. Since we are dealing with
images, in most cases losing some information will not compromise the visual quality of the
image and may yield much smaller file sizes. This is partly because the human visual system is
not as accurate as we would like and tends to ignore small details (can you really distinguish
between all 16,777,216 colors that can be represented in the RGB digital color space?). In
images we can expect some regions to have almost the same color, and we have seen that
repetition can lead to good compression.
Image processing algorithms basically use the redundant or almost-redundant information on
images to greatly reduce the number of bytes required to represent them. If a lossless format or
compression method is used, the algorithm will preserve the contents so it can be reproduced
exactly as the original, at the expense of a larger number of bytes to represent it. If a lossy
compression method is used, the algorithm will represent the image with fewer bytes but the
decompressed image will be slightly different from the original -- and for most applications
that will not matter. Exceptions are, of course, applications where the image must be stored
with the original content for further manipulation -- classification comes to mind.
Lossy compression algorithms often allow the user to set a value for the compression level of
the image, allowing a balance between quality and size. Few lossless algorithms allow this;
more often the trade-off is between better compression and faster processing, with better
compression requiring more processing time.
Image Compression using Java:
We use the predefined ImageWriter class in Java.
Here we also use the predefined class ImageWriteParam.
The setCompressionMode() function is used.
Fig 1.3 (a) Original Image Fig 1.3 (b) Compressed Image
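The following minimal sketch (imports from javax.imageio and java.util omitted, and the 0.5f quality value chosen only for illustration) shows how these classes are typically obtained and configured:
Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpg"); // JPEG writers registered with ImageIO
ImageWriter writer = writers.next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT); // use the quality value set below
param.setCompressionQuality(0.5f); // 0.0 = highest compression, 1.0 = highest quality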
LITERATURE SURVEY
Data compression has only played a significant role in computing since the 1970s, when the
Internet was becoming more popular and the Lempel-Ziv algorithms were invented, but it has
a much longer history outside of computing. Morse code, invented in 1838, is the earliest
instance of data compression, in that the most common letters in the English language such as
“e” and “t” are given shorter Morse codes. Later, as mainframe computers were starting to
take hold in 1949, Claude Shannon and Robert Fano invented Shannon-Fano coding. Their
algorithm assigns codes to symbols in a given block of data based on the probability of the
symbol occurring: the more probable a symbol, the shorter its code, resulting in a more compact
representation of the data. Many years later, images came to be used in most applications, and
there is a need to compress those images so as to make the applications light. Many algorithms
were designed and implemented for this purpose, such as the DCT algorithm and the Huffman
encoding technique used to compress images. Our project does the same. What we have taken
into consideration is to compress the image in terms of size by specifying the quality
(high/normal/low). The results we obtained after compressing are not poor; the compressed
images are quite similar to the originals. Sometimes we need to send a file quickly and without
problems; in that case we can reduce the quality of the file. Reducing the quality of images
makes the file transfer process easier and faster, since the original files are much bigger. Today
we want quick responses from every application, and in this project we have tried to reduce
application response time by compressing images.
Image compression is achieved by removing the redundancy in the image. Redundancies in
an image can be classified into three categories: inter-pixel or spatial redundancy, psycho-visual
redundancy and coding redundancy. Inter-pixel redundancy: natural images have a high
degree of correlation among their pixels. This correlation is referred to as inter-pixel redundancy
or spatial redundancy and is removed by either predictive coding or transform coding. Psycho-
visual redundancy: images are normally meant for consumption by human eyes, which do
not respond with equal sensitivity to all visual information. The relative relevance of the various
image information components can be exploited to eliminate or reduce the data that
is psycho-visually redundant; the process which removes or reduces psycho-visual
redundancy is referred to as quantization. Coding redundancy: variable-length codes matched to
the statistical model of the image or its processed version exploit the coding redundancy in
the image. Lossy compression: an image may be lossy compressed by removing information
which is not redundant but irrelevant (psycho-visual redundancy). Lossy compression introduces
a certain amount of distortion during compression, resulting in higher compression efficiency.
In their paper, Anil Kumar et al. simulate two image compression techniques, namely DCT and
DWT. They concluded that the DWT technique is more efficient than DCT in terms of quality and
efficiency, but in terms of processing time DCT is better than DWT. Swastik Das et al.
presented the DWT and DCT transformations and their working. They concluded that image
compression is of prime importance in real-time applications like video conferencing, where
data are transmitted through a channel. In the JPEG standard, DCT is used for the mapping that
reduces the inter-pixel redundancies, followed by quantization which reduces the psycho-visual
redundancies; coding redundancy is then reduced by the use of an optimal code word having
minimum average length. In the JPEG 2000 standard of image compression, DWT is used for
the mapping, with all other stages remaining the same. They analysed that DWT is more general
and efficient than DCT.
3.2 Class Hierarchy :
o java.lang.Object
  o java.awt.Component (implements java.awt.image.ImageObserver, java.awt.MenuContainer, java.io.Serializable)
    o java.awt.Container
      o java.awt.Window (implements javax.accessibility.Accessible)
        o java.awt.Frame (implements java.awt.MenuContainer)
          o javax.swing.JFrame (implements javax.accessibility.Accessible, javax.swing.RootPaneContainer, javax.swing.WindowConstants)
            o image.compression.ChangePassword
            o image.compression.Decompression (implements java.awt.event.ActionListener)
            o image.compression.fetchuser
            o image.compression.Imageupload
            o image.compression.Login
            o image.compression.MyAccount
            o image.compression.MyImages
            o image.compression.NewPass
            o image.compression.SecurityQ
            o image.compression.signup
            o image.compression.Welcome
  o image.compression.myconnection
3.3 Software & Hardware Requirements :
Software requirements :
Operating system : Windows 7 / Windows 8 / Windows 10
Languages : Java
Tools : NetBeans, XAMPP
Hardware requirements :
Processor : 600 MHz or above
RAM (SD/DDR) : 256 MB
Hard Disk : 30 GB
3.4 Layout of the project :
3.4.1 Netbeans part:
First we have a frame through which the user can sign up or log in. The frame takes a
username and password as input, and on clicking the login button the application checks
whether that username and password exist in the database. If they exist, the application
redirects the user to the account page; otherwise it displays a message saying the username or
password is wrong. The same frame also has a signup button through which a user can create
an account if he/she is not registered yet, and the show-password and forgot-password
functionalities are implemented on this frame as well. The main page has two buttons named
Compression and Decompression, and the application redirects the user to the respective page
on clicking them. In the Compression module, the user loads an image by clicking the browse
button; clicking the upload button then stores the image in both its original and compressed
forms in the database. The same page has another button named MyImages that shows all the
images the user has uploaded as tiles, and the user can click an image to download it in its
compressed form. In the Decompression module, a drop-down list shows the names of all the
images the user has uploaded, and clicking the Decompression button produces the
decompressed image. This is all about this desktop application.
3.4.2 Database Connectivity:
Java Database Connectivity (JDBC) is an application programming interface (API) for
the programming language Java, which defines how a client may access a database. It
is part of the Java Standard Edition platform, from Oracle Corporation. It provides
methods to query and update data in a database, and is oriented towards relational
databases.
JDBC allows multiple implementations to exist and be used by the same application.
The API provides a mechanism for dynamically loading the correct Java packages and
registering them with the JDBC Driver Manager. The Driver Manager is used as a
connection factory for creating JDBC connections.
JDBC connections support creating and executing statements. These may be update
statements such as SQL's CREATE, INSERT, UPDATE and DELETE, or they may be
query statements such as SELECT.
Query statements return a JDBC row result set. The row result set is used to walk over
the result set. Individual columns in a row are retrieved either by name or by column
number. There may be any number of rows in the result set. The row result set has
metadata that describes the names of the columns and their types.
There is an extension to the basic JDBC API in the javax.sql package. If a database operation
fails, JDBC raises an SQLException.
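As an illustration of these JDBC concepts, the sketch below opens a connection and runs the kind of SELECT the login page needs. The MySQL URL, credentials, the class name LoginCheck, and the assumption that the MySQL Connector/J driver is on the classpath are ours; the table and column names follow the Account table described below.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginCheck {
    // Returns true if a row with the given username/password exists in Account.
    public static boolean isValidUser(String username, String password) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/imagecompression"; // assumed XAMPP/MySQL URL
        try (Connection con = DriverManager.getConnection(url, "root", "");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT Id FROM Account WHERE Username = ? AND Password = ?")) {
            ps.setString(1, username);
            ps.setString(2, password);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // at least one matching row means the credentials are valid
            }
        }
    }
}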
We have two tables in the image compression database:
1. Account(Id, Username, Password, SecurityQ, SecurityA)
2. Tbimages(Id, Username, Title, Original, Compressed)
Account
Field Type Null Default Extra
ID Int(11) No None Auto_Increment
Username Varchar(20) No None
Password Varchar(20) No None
SecurityQ Varchar(50) No None
SecurityA Varchar(50) No None
Table 3.1 User Account
Tbimages
Field Type Null Default Extra
ID Int(11) No None Auto_Increment
Username Varchar(20) No None
Title Varchar(100) No None
Original Longblob No None
Compressed Longblob No None
Table 3.2 Image Database
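The two tables can also be created through JDBC. The DDL below is a sketch reconstructed from Tables 3.1 and 3.2; the database name imagecompression and the credentials are assumptions, not the project's original script.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/imagecompression"; // assumed database name
        try (Connection con = DriverManager.getConnection(url, "root", "");
             Statement st = con.createStatement()) {
            // User accounts (Table 3.1)
            st.executeUpdate("CREATE TABLE IF NOT EXISTS Account ("
                    + "Id INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY, "
                    + "Username VARCHAR(20) NOT NULL, "
                    + "Password VARCHAR(20) NOT NULL, "
                    + "SecurityQ VARCHAR(50) NOT NULL, "
                    + "SecurityA VARCHAR(50) NOT NULL)");
            // Original and compressed images (Table 3.2)
            st.executeUpdate("CREATE TABLE IF NOT EXISTS Tbimages ("
                    + "Id INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY, "
                    + "Username VARCHAR(20) NOT NULL, "
                    + "Title VARCHAR(100) NOT NULL, "
                    + "Original LONGBLOB NOT NULL, "
                    + "Compressed LONGBLOB NOT NULL)");
        }
    }
}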
MODULES
4.1 Compression Module :
Compression of image files is an important task when a large number of image files have to be
saved; compressing images when necessary saves a lot of space. An image can easily be
compressed and stored through Java. Compressing an image involves converting it to JPEG and
storing it. In order to compress an image, we read the image and convert it into a BufferedImage
object. Further, we get an ImageWriter from the getImageWritersByFormatName() method found
in the ImageIO class, and from this ImageWriter we create an ImageWriteParam object. The
ImageWriteParam class is mainly used for compressing the images. Its syntax is given below:
Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpg");
ImageWriter writer = writers.next(); // take the first available JPEG writer
ImageWriteParam param = writer.getDefaultWriteParam();
It has two methods which are mostly used for changing the compression settings. The
setCompressionQuality() method sets the quality of the compressed image with a value between
0 and 1: a compression quality setting of 0.0 is most generically interpreted as
“high compression is important,” while a setting of 1.0 is most generically interpreted as “high
image quality is important.” We also use the setCompressionMode() method to set the mode of
the compression; here it takes MODE_EXPLICIT as the parameter.
Some of the other modes are described briefly:
MODE_DEFAULT
It is a constant value that may be passed into methods to enable that feature for future
writes.
MODE_DISABLED
It is a constant value that may be passed into methods to disable that feature for future
writes.
MODE_EXPLICIT
It is a constant value that may be passed into methods to enable a feature for future
writes using values supplied explicitly through set methods such as setCompressionQuality().
Apart from the compression methods, there are other methods provided by the
ImageWriteParam class. They are described briefly:
canOffsetTiles()
It returns true if the writer can perform tiling with non-zero grid offsets while writing.
getBitRate(float quality)
It returns a float indicating an estimate of the number of bits of output data for each bit
of input image data at the given quality level.
getLocale()
It returns the currently set Locale, or null if only a default Locale is supported.
isCompressionLossless()
It returns true if the current compression type provides lossless compression.
unsetCompression()
It removes any previous compression type and quality settings.
unsetTiling()
It removes any previous tile grid parameters specified by calls to setTiling.
Output:
When you execute the code, it compresses the image according to the chosen quality factor and
writes the compressed image on the hard disk with the name compress.jpg.
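A self-contained sketch of what such code might look like is given below. The input file name original.jpg, the class name CompressDemo and the 0.5f quality factor are illustrative assumptions rather than the project's exact code.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class CompressDemo {
    public static void main(String[] args) throws IOException {
        // Read the source image into a BufferedImage.
        BufferedImage image = ImageIO.read(new File("original.jpg"));

        // Obtain a JPEG writer and configure its write parameters.
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.5f); // chosen quality factor between 0.0 and 1.0

        // Write the compressed image to compress.jpg on the hard disk.
        ImageOutputStream ios = ImageIO.createImageOutputStream(new File("compress.jpg"));
        writer.setOutput(ios);
        writer.write(null, new IIOImage(image, null, null), param);
        ios.close();
        writer.dispose();
    }
}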
Original Image:
Fig 4.1 Original Image
Compressed Image- Quality Factor: 0.5
Fig 4.2 Compressed Image- Quality Factor: 0.5
4.2 Decompression Module :
It is an application to get a much better image in terms of quality or size; we can also recover
the original image from its compressed form wherever high image quality is required, even if the
file is of larger size.
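A hedged sketch of this step is shown below: the compressed JPEG is decoded back into pixels and written out again in the extension chosen by the user (PNG in this example, a lossless format). The file names are placeholders, and information already discarded by lossy compression cannot be recovered.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class DecompressDemo {
    public static void main(String[] args) throws IOException {
        // Decode the compressed JPEG back into raw pixels.
        BufferedImage image = ImageIO.read(new File("compress.jpg"));
        // Write the decoded pixels in the user-chosen extension (here PNG).
        ImageIO.write(image, "png", new File("decompressed.png"));
    }
}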
Output:
Compressed Image:
Fig 4.5 Compressed Image
Decompressed Image:
Fig 4.6 Decompressed Image
RESULTS AND SNAPSHOTS
5.1 Classes :
MyConnection
This class is used for database connectivity. It builds a connection between the
database server and the Java project.
Fig 5.1.1 Database Connectivity
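A plausible sketch of such a class is shown below. The JDBC URL, database name and credentials are assumptions for a local XAMPP/MySQL setup, and the class name is written in conventional Java style rather than the project's exact naming.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MyConnection {
    // Returns a connection to the local image compression database (assumed XAMPP/MySQL defaults).
    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/imagecompression", "root", "");
    }
}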
Login
It is a JFrame which acts as the login page of the project. It provides access to the project
to registered users by identifying and authenticating them. This frame links
to three different frames:
MyAccount
ForgotPassword: In case a user forgets his/her password, he/she can change the
password by entering the username and answering the security question set while
signing up.
SignUp: For new users to register themselves and create their account.
MyAccount
This frame refers to the user account page and contains two buttons that link to the two
different modules: compression and decompression. The user can choose either
functionality according to the need. Clicking the “compression” button redirects the
user to the compression page, where the user can upload images and compress them at a
quality chosen according to the requirement. Clicking the “decompression” button
redirects the user to the decompression page, where the user can select uploaded and
compressed images to decompress and save on the system. It also contains a button for
changing the user password, where the user can change his/her password.
Fig 5.1.4 User Account Page Fig 5.1.5 Change Password Page
Compression
This frame contains the functionality of the compression module. In this frame the user
browses for an image on the system and uploads it to the database. Before
uploading, the user needs to choose the quality of the image after compression. The size of the
image after compression depends on the quality chosen, i.e., choosing high quality results
in a compressed image bigger in size than one compressed with normal or low quality. As soon
as the user clicks the upload button, the compression process takes place and the original
image and the compressed image get stored in the database.
Fig 5.1.6 Compression Page
MyImages
Using this frame the user can choose uploaded and compressed images and save them on
his/her system by selecting an image and clicking the download button on a JOptionPane.
Compressed images stored in the database are displayed on this frame in the form of
tiles with the help of a JTable and its functions.
Fig 5.1.7 jTable
Fig 5.1.8 My Images Page
Decompression
This frame consists of the decompression part. In this frame the user is allowed to choose an
image from the list of previously uploaded images. When the user clicks the
“decompress” button, he/she needs to choose the path where the decompressed image should
be stored on the system and the extension in which to write the image. Choosing the correct
extension is important, as it decides the quality of the image.
Fig 5.1.9 jFileChooser Syntax
Fig 5.1.10 Decompression Page
5.2 Functions :
Compression and Decompression
In order to compress an image, we read the image and convert it into a BufferedImage
object. Further, we get an ImageWriter from the getImageWritersByFormatName()
method found in the ImageIO class. From this ImageWriter, we create an
ImageWriteParam object.
Fig 5.2.1 ImageWriter Syntax
Upload Image
We convert a BufferedImage to a byte array in order to send it to the server. We use the Java
class ByteArrayOutputStream, which can be found under the java.io package. Its syntax is
given below:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(image, "jpg", baos);
In order to convert the image to a byte array, we use the toByteArray() method of the
ByteArrayOutputStream class. Its syntax is given below:
byte[] bytes = baos.toByteArray();
Fig 5.2.2 Image Upload Syntax
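A sketch of how these bytes might then be inserted into the Tbimages table through JDBC is shown below. The helper class name is hypothetical; the column names follow Table 3.2.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ImageUploadHelper {
    // Sketch: store the original and compressed bytes for one image (columns from Table 3.2).
    public static void uploadImage(Connection con, String username, String title,
                                   byte[] originalBytes, byte[] compressedBytes) throws SQLException {
        String sql = "INSERT INTO Tbimages (Username, Title, Original, Compressed) VALUES (?, ?, ?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, username);
            ps.setString(2, title);
            ps.setBytes(3, originalBytes);    // bytes of the original image file
            ps.setBytes(4, compressedBytes);  // bytes produced with ImageIO.write(...) as shown above
            ps.executeUpdate();
        }
    }
}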
Download Image
To get the image from the database and download it to the system, the following functions are
executed:
Fig 5.2.3 Image Download Syntax
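One possible form of such a function is sketched below. The helper class name is hypothetical, the connection is assumed to come from the database connectivity class described earlier, and the column names follow Table 3.2.
import java.io.FileOutputStream;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ImageDownloadHelper {
    // Sketch: fetch the compressed blob for a given title and write it to a file on disk.
    public static void downloadCompressed(Connection con, String title, String targetPath)
            throws SQLException, IOException {
        String sql = "SELECT Compressed FROM Tbimages WHERE Title = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, title);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    byte[] data = rs.getBytes("Compressed");
                    try (FileOutputStream out = new FileOutputStream(targetPath)) {
                        out.write(data); // write the compressed image bytes to the chosen path
                    }
                }
            }
        }
    }
}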
When you execute the code given above, output like the following is seen:
Fig 5.2.3 Image Download Output
CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
Data compression is very important in the computing world and is commonly used by many
applications, including the suite of SyncBack programs. By providing a brief overview of how
compression works in general, it is hoped this project allows users of data compression to
weigh its advantages and disadvantages.
6.2 Future Scope
In this project, many of the important current image compression and encryption techniques
have been presented and analyzed. The best way to achieve fast and secure transmission is to
combine compression and encryption of multimedia data such as images. The compression
techniques observed are either lossy or lossless. Lossless compression is always preferred, but
to achieve secrecy some degradation in image quality is accepted. Encryption is applied by
different researchers by means of encryption algorithms that encrypt the entire or a partial
multimedia bit sequence using a fast conventional cryptosystem. Much of the past and current
research targets encrypting only a carefully selected part of the image bitstream in order to
reduce the computational load and yet keep the security level high. In the proposed approach
the key must be sent separately, which raises the separate issue of securely transmitting the
secret key. The future scope of the proposed work is to design a mechanism to securely
transmit the key so that unauthorized persons have no access to it.