The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.1, February 2016
DOI: 10.5121/ijma.2016.8104
ANALYSING JPEG CODING WITH MASKING
S.-Y. YEO and *Y.-Y. NGUWI
School of Business (IT), James Cook University Singapore
ABSTRACT
The growing trend of online image sharing and downloading today mandates the need for better encoding and decoding schemes. This paper looks into this issue of image coding. Multiple Description Coding is an encoding and decoding scheme specifically designed to provide greater error resilience for data transmission. The main challenge in Multiple Description Coding is the lossiness of its transmission channels. This work attempts to address the issue of reconstructing a high-quality image using just one descriptor rather than the conventional two descriptors. This work compares the use of a Type I quantizer and a Type II quantizer. We propose and compare four coders by examining the quality of the reconstructed images. The four coders are the JPEG HH (Horizontal Pixel Interleaving with Huffman Encoding) model, the JPEG HA (Horizontal Pixel Interleaving with Arithmetic Encoding) model, the JPEG VH (Vertical Pixel Interleaving with Huffman Encoding) model, and the JPEG VA (Vertical Pixel Interleaving with Arithmetic Encoding) model. The findings suggest that the choice between horizontal and vertical pixel interleaving does not affect the results much, whereas the choice of quantizer greatly affects performance.
KEYWORDS
JPEG, image, coding, masking, MDC
1. INTRODUCTION
The use of the internet and mobile networks is increasing exponentially. Transporting images and video over the internet while guaranteeing content and speed is a challenging problem. The difficulty can be attributed to the fact that images and video are often much larger than other forms of transmitted data. When an image or video is transmitted through the internet, it is first encoded into several packets using a progressive encoding scheme such as that of the Joint Photographic Experts Group (JPEG). Encoded packets are then delivered using the Transmission Control Protocol (TCP), the standard protocol that controls retransmission of lost packets. When a transient error occurs, the decoder fails to decode and reconstruct the original signal due to the loss of packets.
Fast Multiple Description Coding (FMDC) is an encoding and decoding scheme specifically designed to provide greater error resilience for data transmission. It encodes a source into two or more independent bit streams known as descriptors, each of which carries crucial information and features of the original source. This ensures that the receiver is able to recover the original data at a certain level of quality even if some descriptions are lost. Quality improves as more descriptions are received and decoded. As these transmission channels are often unreliable, the popularity of MDC has increased over the past few years. Multiple Description Coding typically consists of two phases: an encoding phase and a decoding phase. During the encoding phase, the system divides and compresses an input image and produces descriptions to be transmitted through different channels. At the decoding phase, if only one descriptor is received, it is decoded and the original image is reconstructed at a low but acceptable quality. When more descriptors are received, the image can be reconstructed at a higher quality.
The main challenge in Multiple Description Coding is the lossiness of its transmission channels. This work attempts to address the issue of reconstructing a high-quality image using just one descriptor rather than the conventional two descriptors. This paper is organized into several
sections: Section 2 reviews literature related to image coding and transforms. Section 3 presents the system design and results.
2. LITERATURE REVIEW
This section reviews and summarizes some of the common techniques utilized by various researchers trying to improve the current technology. It focuses on two-dimensional techniques such as the Two-Dimensional Discrete Cosine Transform (2D DCT), the Two-Dimensional Discrete Wavelet Transform (2D DWT), the Multiple Description Scalar Quantizer (MDSQ), and entropy encoding.
2.1. Two Dimensional Discrete Cosine Transform (2D DCT)
The Discrete Cosine Transform (DCT) [1, 2] was first developed by Ahmed et al. [3] in 1974. It is a technique that converts an input signal (e.g. an image's pixel values) into the frequency domain. The DCT is widely used in image compression schemes such as JPEG, the first international standard for still image compression, established in 1992. After the transform, the important information of an image is concentrated in the low-frequency area. The less important information in the higher frequencies, to which human eyes are not sensitive, is then discarded. The file size of the image is reduced while the quality remains acceptable. The disadvantage of the 2D DCT is the introduction of blocking artefacts due to the discarding of high-frequency components during compression [4].
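As a concrete illustration (not from the paper), here is a minimal sketch of the 8x8 block 2D DCT in Python, assuming NumPy and SciPy are available; the helper names dct2/idct2 are ours:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2D DCT: apply the 1D DCT along one axis, then along the other."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2D DCT, restoring the spatial-domain block."""
    return idct(idct(coeffs, norm='ortho', axis=0), norm='ortho', axis=1)

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
coeffs = dct2(block)
# Energy concentrates in the top-left (low-frequency) corner:
print(np.round(coeffs[:3, :3]))
assert np.allclose(idct2(coeffs), block)  # the transform itself is lossless
```

The loss in DCT-based coding comes only from the later quantization step, not from the transform itself.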
2.2. Two Dimensional Discrete Wavelet Transform (2D DWT)
Due to the blocking effects mentioned in the previous section, the JPEG committee introduced another image compression standard, called JPEG2000, which is based on wavelets; the Haar wavelet is used. The 2D Haar wavelet works by down-sampling the input image twice: the first time along the rows, then along the columns, each pass splitting the signal into sub-bands. The 2D DWT has gained worldwide recognition in digital signal processing because of the advantages listed below.
• The two-dimensional DWT offers a better compression rate and better reconstruction of image quality compared to the 2D DCT.
• It does not have blocking artefacts.
• The rate of compression is scalable with multiple transformation levels.
Although the 2D DWT is a better compression technique than the 2D DCT, many people still choose the 2D DCT because implementing the 2D DWT in hardware and software is far more expensive and complex [5].
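For illustration, a minimal sketch of one 2D Haar decomposition level, assuming an even-sized grayscale NumPy array; this mirrors the row-then-column down-sampling described above and is not the paper's implementation:

```python
import numpy as np

def haar_1d(x, axis):
    """Split into approximation (average) and detail (difference) sub-bands."""
    even = x.take(np.arange(0, x.shape[axis], 2), axis=axis)
    odd = x.take(np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_2d(img):
    """One level: rows first, then columns, giving the LL, LH, HL, HH bands."""
    lo, hi = haar_1d(img, axis=1)   # along rows
    ll, lh = haar_1d(lo, axis=0)    # along columns
    hl, hh = haar_1d(hi, axis=0)
    return ll, lh, hl, hh

img = np.random.rand(512, 512)
ll, lh, hl, hh = haar_2d(img)
print(ll.shape)  # (256, 256): each sub-band is a quarter-size image
```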
2.3. Pixel Interleaving (PI)
Pixel interleaving (PI) is a technique that divides an image into two or more coarse sub-images; it is usually employed before transform blocks such as the 2D DCT or 2D DWT. It produces multiple descriptions for the transmission of an image. At the decoder side, thanks to the high correlation between adjacent pixels, the original image can be reconstructed by interpolation even if some of the sub-images are lost during transmission. Sub-images can be formed in four ways: horizontal pixel interleaving, vertical pixel interleaving, diagonal pixel interleaving, and dynamic pixel interleaving. Dynamic pixel interleaving produces better results [6], but extra bits are needed. The horizontal and vertical methods are simple, but their results are not ideal due to the separation of adjacent pixels of an image. Overall, pixel interleaving is simple to integrate into other image coding techniques; the disadvantage is that if the loss rate of the transmission channel is high, the quality of the reconstructed image is relatively poor [7].
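A minimal sketch of horizontal pixel interleaving and single-descriptor recovery by neighbour-row interpolation, under our own simplifying assumptions (a grayscale NumPy image with an even number of rows):

```python
import numpy as np

def horizontal_interleave(img):
    """Split into two sub-images of alternating rows (the two descriptors)."""
    return img[0::2, :], img[1::2, :]

def recover_from_one(sub0, shape):
    """Rebuild the full image from descriptor 0 only: each missing odd row
    is interpolated as the average of the even rows above and below it."""
    out = np.empty(shape, dtype=float)
    out[0::2, :] = sub0
    out[1:-1:2, :] = (sub0[:-1, :] + sub0[1:, :]) / 2.0
    out[-1, :] = sub0[-1, :]  # last row has no neighbour below; replicate
    return out

img = np.random.rand(512, 512)
d0, d1 = horizontal_interleave(img)       # 256x512 each, as in Figure 8
approx = recover_from_one(d0, img.shape)  # acceptable-quality estimate
```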
2.4. Quantization
Quantization is a technique used to reduce the number of bits required to represent an image. It is a lossy compression step employed in the source encoder. Compression of an image can be accomplished by removing redundant information within the image itself, namely spatial redundancy, spectral redundancy, and temporal redundancy. Two kinds of quantizer are relevant here: the scalar quantizer and the multiple description scalar quantizer (MDSQ). A scalar quantizer rounds a floating-point real value to an integer value depending on the step size. The MDSQ generates two coarse descriptions simultaneously from every input image; it aims to reconstruct the input image at the highest quality when both descriptions arrive at the decoder side successfully. Figure 1 shows the basic architecture of an encoding system using the MDSQ. The performance of the MDSQ depends on the spread of the index assignment, which in turn determines the amount of redundancy that a particular index assignment possesses [8].
Figure 1. Basic architecture of MDSQ encoding system.
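To make the scalar quantization step concrete, here is a sketch of JPEG-style quantization of an 8x8 DCT coefficient block using the standard JPEG luminance table; this illustrates the generic technique, not the paper's Type I or Type II quantizers:

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(coeffs):
    """Divide each DCT coefficient by its table entry and round to integer;
    most high-frequency entries become zero, which is where the loss occurs."""
    return np.round(coeffs / Q).astype(int)

def dequantize(q):
    """Decoder side: multiply back, recovering only an approximation."""
    return q * Q
```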
2.5. Entropy Coding
Entropy coding is usually employed after quantization to further compress the quantized values for a better compression ratio. It is a lossless compression scheme that creates and assigns a unique codeword to every unique symbol (pixel value) that occurs within an input image. Entropy coding assigns codewords according to the number of occurrences of each symbol: the symbol that occurs most frequently is assigned the shortest codeword. Types of entropy coding include Huffman coding [9] and arithmetic coding [10]. Huffman coding compresses an image by encoding the original data into shorter codes; each unique symbol is given a unique prefix code whose length depends on the frequency of occurrence of that particular symbol. These prefix codes are then transmitted for decoding purposes at the receiver side. Arithmetic coding uses each symbol together with a probability value that ranges from 0 to 1, where the probability denotes the frequency of occurrence.
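As an illustration of the principle, a minimal sketch of Huffman code construction over symbol frequencies using Python's standard-library heapq; it shows how frequent symbols receive shorter codewords, and is not the exact coder used in the paper:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix code: frequent symbols receive shorter codewords."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # the two least frequent subtrees
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c0.items()}
        merged.update({s: '1' + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes([0, 0, 0, 0, 5, 5, -3, 7])
print(codes)  # e.g. {0: '0', 5: '10', -3: '110', 7: '111'}
```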
3. THE SYSTEM AND RESULTS
3.1. JPEG Coding
We assess the performance of the proposed system using the compression ratio (CR), the mean squared error (MSE) as a measure of distortion, the peak signal-to-noise ratio (PSNR), and the processing time. The higher the compression ratio, the poorer the image quality usually is, while a higher PSNR indicates better quality. A balance between CR and PSNR has to be achieved for optimum image quality.
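For reference, these are the standard definitions of the metrics for an M x N original image I and reconstructed image I', assuming 8-bit grayscale images with peak value 255:

$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-I'(i,j)\bigr)^{2},\qquad
\mathrm{PSNR}=10\log_{10}\frac{255^{2}}{\mathrm{MSE}}\ \mathrm{dB},\qquad
\mathrm{CR}=\frac{\text{uncompressed size}}{\text{compressed size}}$$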
quality. We used three commonly used image processing gray scale, namely cameraman, Lena,
and pears. They are of different types, jpg, tif and png. The size and dimensions are listed in
Figure 2
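These metrics can be computed as follows (a minimal sketch; the peak value of 255 assumes 8-bit grayscale images):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same size."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better quality."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bytes, compressed_bytes):
    """CR > 1 means the compressed representation is smaller."""
    return original_bytes / compressed_bytes
```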
Figure 2. Images used in experiment
Figure 3. JPEG HH encoding model (Horizontal PI and Huffman Encoding)
Figure 4. JPEG HA encoding model (Horizontal PI and Arithmetic Encoding)
Figure 5. JPEG VH encoding model (Vertical PI and Huffman Encoding)
Figure 6. JPEG VA encoding model (Vertical PI and Arithmetic Encoding)
Figure 7. JPEG decoding
Figure 8. Horizontal Pixel Interleaving
(a) Original image (512x512) (b) Sub-image 1 (256x512) (c) Sub-image 2 (256x512)
Figure 9. Vertical Pixel Interleaving.
(a) Original image (512x512) (b) Sub-image 1 (512x256) (c) Sub-image 2 (512x256)
Figure 3 illustrates the JPEG HH encoding model using Horizontal Pixel Interleaving and Huffman
Encoding; the input image is first interleaved into two sub-images. Figure 4 shows the JPEG HA
encoding model using Horizontal Pixel Interleaving and Arithmetic Encoding. The JPEG VH model
in Figure 5 uses Vertical Pixel Interleaving and Huffman Encoding, whereas Figure 6 presents the
JPEG VA coder using Vertical Pixel Interleaving and Arithmetic Encoding. The sub-images are
fed into the 2D DCT for transformation; the transformed images are then quantized to reduce the
file size, and the quantized coefficients are entropy encoded individually to enhance bandwidth
efficiency. Two descriptors, D1 and D2, are created for transmission.
Figure 7 illustrates our JPEG decoding system. Decoding reverses the encoding process: once the
system receives the D1 and D2 descriptors, they are entropy decoded, de-quantized, and passed
through the inverse DCT to retrieve the original image. The interpolation step is needed when only
one descriptor is received due to transmission errors.
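When only one descriptor survives, the missing rows (or columns) can be estimated from their neighbours. Below is a minimal sketch using simple linear interpolation; this is one plausible reading of the interpolation step, as the exact interpolation formula is not specified above.

```python
import numpy as np

def reconstruct_from_one(sub, direction="horizontal"):
    """Rebuild a full-size image from a single sub-image by averaging
    the two nearest surviving rows (or columns)."""
    if direction == "vertical":
        return reconstruct_from_one(sub.T, "horizontal").T
    h, w = sub.shape
    out = np.empty((2 * h, w), dtype=np.float64)
    out[0::2, :] = sub                                   # surviving rows
    out[1:-1:2, :] = (sub[:-1, :] + sub[1:, :]) / 2.0    # average neighbours
    out[-1, :] = sub[-1, :]                # last row: nearest neighbour copy
    return out
```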
The steps for encoding are summarized below (a code sketch follows the list):
Step 1: The input image is interleaved into two sub-images.
Step 2: Each sub-image is divided into 8x8 blocks of pixels.
Step 3: Apply the 2D DCT to every block.
Step 4: Round the coefficients to integers with the scalar quantizer to reduce the number of bytes.
Step 5: Each block of quantized 2D DCT coefficients now represents the original image in compressed form.
Step 6: After quantization, both sub-images are entropy encoded individually as a whole, instead of
block by block, generating the two descriptions to be transmitted.
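The sketch below ties steps 2-5 together for one sub-image, using SciPy's DCT with orthonormal scaling. The step size of 16 is an illustrative value only, and the code assumes image dimensions divisible by 8.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2D type-II DCT (orthonormal) of an 8x8 block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def encode_subimage(sub, step=16.0):
    """Steps 2-5: tile the sub-image into 8x8 blocks, transform each
    block with the 2D DCT, then scalar-quantize the coefficients."""
    h, w = sub.shape
    out = np.empty((h, w), dtype=np.int32)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            out[r:r+8, c:c+8] = np.round(dct2(sub[r:r+8, c:c+8]) / step)
    return out  # then entropy encoded as a whole (step 6)
```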
Figure 8 shows the original image interleaved into two sub-images using the horizontal pixel
interleaving approach; vertical pixel interleaving is shown in Figure 9. In the next step, the
sub-images are divided into 8x8 blocks and each block is transformed into the frequency domain for
quantization. Figure 10 shows a sample block before and after transformation into the DCT domain,
where many of the values become zero or near zero; it is for this reason that this step reduces the
file size significantly. Figure 11 shows the result of transforming the same sub-image without
dividing it into 8x8 blocks. It shows clearly that the block-based method improves the compression
ratio by using fewer bytes to represent the same data.
Quantization is usually the step where distortion, or loss of image information, is introduced. We
propose the use of two types of quantizer: Type I uses masking, while Type II uses the JPEG
quantizer. The Type I masking quantizer takes advantage of an important property of the 2D DCT:
the important information of an image concentrates in the low-frequency area. The low-frequency
components are usually concentrated at the top-left of the matrix, while the high-frequency
components lie mostly at the bottom-right; by discarding the latter we can improve the compression
ratio. We utilized two masks, mask8 and mask16. Mask8 keeps the top-left 8 coefficients of the
2D DCT; mask16 keeps the top-left 16. Each 8x8 block of a sub-image is multiplied element-wise
by one of them for compression. If a block is masked with mask8, only 8 transformed coefficients
remain while the rest are discarded, i.e. 12.5% (8/64) of the original information; mask16, in turn,
retains 25% (16/64). The Type II quantizer compresses an image by keeping the lower-frequency
transformed coefficients. This is done by dividing each block of transformed coefficients
element-wise by an 8x8 quantization matrix. This matrix allows us to choose the quality level of
the reconstructed image, ranging from 1 to 100: level 1 gives the highest compression but the
lowest reconstruction quality, while level 100 gives the lowest compression but the highest quality.
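The two quantizers can be sketched as follows. The exact shape of mask8/mask16 is not spelled out above, so the sketch keeps the k lowest-frequency coefficients in diagonal (zig-zag-like) order, which is our assumption; the Type II table uses the standard JPEG luminance quantization matrix with the common IJG-style quality scaling, also an assumption about the quantizer's internals.

```python
import numpy as np

def make_mask(k, n=8):
    """Type I: binary mask keeping the k top-left (lowest-frequency)
    coefficients of an n x n DCT block, taken in diagonal order.
    (The exact mask shape is an assumption.)"""
    order = sorted((r + c, r, c) for r in range(n) for c in range(n))
    mask = np.zeros((n, n))
    for _, r, c in order[:k]:
        mask[r, c] = 1.0
    return mask

mask8, mask16 = make_mask(8), make_mask(16)
# Type I quantization: element-wise multiply, discarding 87.5% / 75%:
#   kept = dct_block * mask8

# Standard JPEG luminance quantization table (Annex K).
JPEG_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def type2_table(quality):
    """Type II: scale the JPEG table for a quality level in 1..100
    (common IJG scaling; level 1 = coarsest, 100 = finest)."""
    s = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
    return np.clip(np.floor((JPEG_LUMA * s + 50.0) / 100.0), 1, 255)

# Type II quantization: divide element-wise and round, e.g.
#   q = np.round(dct_block / type2_table(50))
```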
Lastly, Huffman entropy coding was utilized. Figure 12 shows the original matrix and the results
after the inverse transform. There is a high similarity between the three matrices in (a), (b) and (c),
especially in the top-left elements of the transformed coefficients. This is in line with the earlier
discussion on keeping the lower-frequency components of the 2D DCT.
Figure 10. Sample block before and after 2D DCT
Figure 11. Transformed coefficients without block-based DCT
Figure 12. Results after inverse transform with masking.
(a) Original matrix (b) Results from mask8 (c) Results from mask16
3.2. Performance Comparison
We assess the performance of the proposed system using Compression Ratio (CR), Mean Squared
Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). Table 1 lists the results for the JPEG HH
coder using Horizontal Pixel Interleaving and Huffman Encoding with mask8 and mask16. In terms
of compression ratio, the performance is within expectation: mask8 compresses better than mask16
because mask8 keeps fewer bits of information. The png image is compressed the most because of
its larger original size. In terms of codeword length, mask8 and mask16 produce about the same
size across the three images. For mean squared error, the results for one descriptor and two
descriptors do not differ significantly; the MSE for the tif image is much higher than for the other
two types. Another observation is that the MSE difference between one descriptor and two
descriptors is no more than 10%, which shows that the approach is suitable even for low-bandwidth
transmission. The PSNR is better for the png image, and the difference between one and two
descriptors does not appear to be significant. Our approach achieves a good PSNR, and the CR can
be improved by fine-tuning our quantizer. The experiment also shows that our technique obtains
good results on png file compression: the file size could be shrunk by 18 times without sacrificing
PSNR.
Table 2 shows the results of the JPEG HA coder using horizontal pixel interleaving and arithmetic
encoding. This coder controls the quality of the reconstructed image through a level ranging from 1
to 100: level 1 gives the highest compression but the lowest quality, while level 100 gives the
lowest compression but the highest quality. Hence we measure the bits needed for levels 50 and 99.
For level 50, the highest compression achieved is 33 times; at the best quality level of 99, only 5
times compression is obtained for the png image. The MSE for this model is much larger than for
the previous model, and the PSNR for all three types of images is lower as well.
Table 3 shows the results of the JPEG VH coder using vertical pixel interleaving and Huffman
encoding. The compression ratio is quite similar to that of the HH coder. The MSE for the jpg and
png images is lower and quite similar to that of the JPEG HH coder, which further suggests that the
MSE for Huffman coding is much lower than for arithmetic coding. Table 4 shows the results of the
JPEG VA coder using vertical pixel interleaving and arithmetic encoding. With this combination,
the results are similar to the JPEG HA model: higher MSE and lower PSNR.
The experimental results also show that horizontal pixel interleaving and vertical pixel interleaving
give similar results; the difference between the two is less than 1 dB in PSNR, so both methods are
recommended for a 2D DCT system. The other observation concerns the choice of quantizers. The
compression ratio for the Type II quantizer (used in the JPEG HA and VA models) is around twice
that of the Type I quantizer (used in the JPEG HH and VH models), but the PSNR achieved is
around 1.5 times poorer. The Type II quantizer compresses an image by dividing every 2D DCT
coefficient by a constant, so the low-frequency area is also affected and, with it, the quality of the
reconstructed image. The Type I quantizer keeps the lower-frequency coefficients without altering
them.
TABLE 1. JPEG HH CODER RESULTS (Horizontal PI and Huffman Encoding)
TABLE 2. JPEG HA CODER RESULTS (Horizontal PI and Arithmetic Encoding)
TABLE 3. JPEG VH CODER RESULTS (Vertical PI and Huffman Encoding)
TABLE 4. JPEG VA CODER RESULTS (Vertical PI and Arithmetic Encoding)
4. CONCLUSION
This paper started by introducing Multiple Description Coding and its associated issues. In the
literature review, we looked at techniques such as the Two-Dimensional Discrete Cosine Transform
(2D DCT), the Two-Dimensional Discrete Wavelet Transform (2D DWT), the Multiple Description
Scalar Quantizer (MDSQ), and entropy encoding. We then proposed a system comparing three
different image file types: jpg, png, and tif. The system adopted the 2D DCT with masking
quantization, using two types of mask: mask8 and mask16. Mask8 compresses better than mask16
because mask8 keeps fewer bits of information, and the png image is compressed the most because
of its larger original size. In terms of codeword length, mask8 and mask16 produce about the same
size across the three types of images, and for mean squared error (MSE) the results for one
descriptor and two descriptors do not differ significantly. The MSE difference between one
descriptor and two descriptors is no more than 10%, which shows that the approach is suitable even
for low-bandwidth transmission. The experimental results also show that horizontal pixel
interleaving and vertical pixel interleaving give similar results; the difference between the two is
less than 1 dB in PSNR, and both methods are recommended for a 2D DCT system. The other
observation concerns the choice of quantizers. The compression ratio for the Type II quantizer
(used in the JPEG HA and VA models) is around twice that of the Type I quantizer (used in the
JPEG HH and VH models), but the PSNR achieved is around 1.5 times poorer. The Type II
quantizer compresses an image by dividing every 2D DCT coefficient by a constant, so the
low-frequency area is also affected and, with it, the quality of the reconstructed image. The Type I
quantizer keeps the lower-frequency coefficients without altering them.
REFERENCES
[1] Andrew B. Watson (1994), 'Image Compression Using the Discrete Cosine Transform',
Mathematica Journal, pp. 81-88.
[2] Subhasis Saha (2000), 'Image Compression – from DCT to Wavelets: A Review', Crossroads,
6(3), pp. 12-21, March 2000.
[3] 'A brief review of DCT-based JPEG and few novel wavelet-based image coding algorithms in
ACM Crossroads', http://www.acm.org/crossroads/xrds6-3/sahaimgcoding.html, accessed 27 Oct 2014.
[4] Ahmed, N., Natarajan, T., & Rao, K. R. (1974), 'Discrete Cosine Transform', IEEE Transactions
on Computers, C-23(1), pp. 90-93.
[5] Vaishampayan, V. A. (1993), 'Design of multiple description scalar quantizers', IEEE
Transactions on Information Theory, 39(3), pp. 821-834.
[6] Anhong, W., Yao, Z., & Jeng-Shyang, P. (2008), 'Multiple Description Image Coding Based on
DSC and Pixel Interleaving', paper presented at the 2008 International Conference on Intelligent
Information Hiding and Multimedia Signal Processing (IIHMSP '08).
[7] Zixiang, X., Ramchandran, K., Orchard, M. T., & Ya-Qin, Z. (1999), 'A comparative study of
DCT- and wavelet-based image coding', IEEE Transactions on Circuits and Systems for Video
Technology, 9(5), pp. 692-695.
[8] DeBrunner, V., DeBrunner, L., & Longji, W. (1999), 'Recovery of lost blocks by dynamic pixel
interleaving', Proceedings of the 1999 IEEE International Symposium on Circuits and Systems
(ISCAS '99).