Methods such as edge-directed interpolation and projection onto convex sets (POCS), which are widely used for image error concealment because they produce better image quality, are complex and time-consuming. Moreover, they are not suitable for real-time error concealment, where the decoder may lack sufficient computational power or must operate online. In this paper, we propose a data-hiding scheme for error concealment in digital images. Edge-direction information for each block is extracted at the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), thereby reducing the workload of the decoder. System performance, in terms of fidelity and computational load, is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used to recover the lost data. Experimental results duly support the effectiveness of the proposed scheme.
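The QIM embedding named in this abstract snaps a feature value onto one of two interleaved quantizer lattices, one per bit value. A minimal scalar sketch, where the step size `delta` and the function names are illustrative rather than taken from the paper:

```python
# Scalar quantization index modulation (QIM): a value x carries one bit b
# by being quantized to one of two lattices offset by delta/2.

def qim_embed(x: float, bit: int, delta: float = 8.0) -> float:
    """Quantize x to the lattice associated with `bit` (0 or 1)."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((x - offset) / delta) * delta + offset

def qim_extract(y: float, delta: float = 8.0) -> int:
    """Recover the bit by checking which lattice y lies closer to."""
    d0 = abs(y - round(y / delta) * delta)
    d1 = abs(y - (round((y - delta / 2.0) / delta) * delta + delta / 2.0))
    return 0 if d0 <= d1 else 1
```

The embedding distortion is bounded by `delta/2`, which is the trade-off the paper tunes between imperceptibility and robustness.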
Modified Skip Line Encoding for Binary Image Compression (idescitation)
Image compression is an important issue in Internet, mobile communication, digital library, digital photography, multimedia, teleconferencing and other applications. These application areas focus on the problem of optimizing storage space and transmission bandwidth. In this paper, a modified form of skip-line encoding is proposed to further reduce redundancy in the image. Its performance is found to be better than that of standard skip-line encoding.
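For reference, baseline skip-line encoding replaces each row that repeats its predecessor with a skip token, so runs of identical rows (common in binary images) cost one token each. A small sketch of that baseline; the token format is my own, and the paper's modified variant refines this scheme:

```python
# Skip-line encoding for a binary image given as a list of row tuples:
# rows identical to the previous row collapse into ('S', count) tokens.

def skip_line_encode(rows):
    out, prev = [], None
    for row in rows:
        if row == prev and out and out[-1][0] == 'S':
            out[-1] = ('S', out[-1][1] + 1)   # extend the current skip run
        elif row == prev:
            out.append(('S', 1))             # start a new skip run
        else:
            out.append(('L', row))           # literal row
        prev = row
    return out

def skip_line_decode(tokens):
    rows = []
    for kind, payload in tokens:
        if kind == 'L':
            rows.append(payload)
        else:
            rows.extend([rows[-1]] * payload)  # repeat the last literal row
    return rows
```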
Enhancement of genetic image watermarking robust against cropping attack (ijfcstjournal)
An enhancement of an image watermarking algorithm, made robust against a particular attack by using a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking; to preserve both characteristics at reasonable levels, a genetic algorithm is used. Several factors are introduced to provide robustness against cropping attacks: the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
Image Compression Using Intra Prediction of H.264/AVC and Implement of Hiding... (ijsrd.com)
Current still-image compression techniques lack the required level of standardization and leave room for improvement in compression rate, computation, and so on. This paper employs techniques from H.264/MPEG-4 Advanced Video Coding (AVC) to improve still-image compression. The H.264/MPEG-4 standard promises much higher compression and quality than other existing standards such as MPEG-4 and H.263. This paper utilizes the intra-prediction approach of H.264/AVC together with Huffman coding to improve the compression rate. Each 4x4 block is predicted by choosing the best of the 9 different modes, selected by the sum of absolute errors (SAE) criterion. The paper also presents an image-hiding algorithm based on singular value decomposition: a data-hiding algorithm applied to the encoded bit-stream. Before the secret image is embedded into the cover image, the residue of the cover image is first calculated and encoded using Huffman coding. At the decoder side the secret image is extracted and the cover image is reconstructed with a sufficient peak signal-to-noise ratio.
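The SAE-based mode decision can be sketched with a reduced mode set (vertical, horizontal and DC, three of H.264's nine 4x4 modes); the function names and integer DC rounding here are illustrative:

```python
import numpy as np

# SAE-driven intra mode selection for a 4x4 block: predict from the
# reconstructed top row and left column, pick the mode with smallest
# sum of absolute errors.

def predict(mode, top, left):
    if mode == "vertical":                      # copy the row above downwards
        return np.tile(top, (4, 1))
    if mode == "horizontal":                    # copy the left column across
        return np.tile(left.reshape(4, 1), (1, 4))
    return np.full((4, 4), (top.sum() + left.sum() + 4) // 8)  # DC mean

def best_mode(block, top, left):
    saes = {m: int(np.abs(block - predict(m, top, left)).sum())
            for m in ("vertical", "horizontal", "dc")}
    return min(saes, key=saes.get), saes
```

The full standard evaluates all nine modes the same way; the winner's index is what gets entropy-coded.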
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... (IAEME Publication)
In this paper, we implement a new image compression and decompression model that searches for the target image based on robust image-block variance estimation. Many image compression methods have been proposed in the literature to minimize the error rate and improve the compression ratio. For encoding medium-type images, traditional models use a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. In the proposed work, we use block estimation and the image distortion rate to optimize the compression ratio and minimize the error rate. Experimental results show that the proposed model gives a higher compression ratio and a lower error rate than traditional models.
Efficient video compression using EZWT (IJERA Editor)
In this article, a wavelet-based lossy video compression algorithm is presented. Motion estimation and compensation, an important part of the compression, is based on segment movements. The proposed work is based on the Embedded Zerotree Wavelet Transform (EZWT). Different videos are analyzed on the basis of peak signal-to-noise ratio (PSNR) and mean squared error (MSE). While keeping the PSNR within acceptable limits, the proposed EZWT algorithm achieves very good compression ratios, making the technique more efficient than the 2-D Discrete Cosine Transform (DCT) in the H.264/AVC codec. The method is suitable for low-bit-rate video, showing the highest compression ratio and a very good PSNR of more than 30 dB.
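The PSNR and MSE figures quoted here follow the standard definitions for 8-bit images; a small sketch:

```python
import numpy as np

# Standard image-quality metrics: mean squared error and peak
# signal-to-noise ratio for 8-bit images (peak value 255).

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak * peak / m)
```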
High Speed Data Exchange Algorithm in Telemedicine with Wavelet based on 4D M... (Dr. Amarjeet Singh)
Existing medical imaging techniques such as fMRI, positron emission tomography (PET), dynamic 3D ultrasound and dynamic computerized tomography yield large four-dimensional data sets. 4D medical data sets are series of volumetric images captured over time; they are large and demand substantial resources for storage and transmission. In this paper, we present a method in which a 3D image is taken, the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DTCWT) are applied to it separately, and the image is split into sub-bands. Encoding and decoding are done using 3D-SPIHT at different bits per pixel (bpp). The reconstructed image is synthesized using the inverse DWT. The quality of the compressed image is evaluated using metrics such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR).
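The sub-band split step can be illustrated with one level of a 2-D Haar DWT, used here as a stand-in for the paper's filter banks (DTCWT and 3D-SPIHT coding are beyond this sketch):

```python
import numpy as np

# One level of a 2-D Haar DWT: average/difference along rows, then along
# columns, giving LL/LH/HL/HH sub-bands at half resolution each way.

def haar_dwt2(img):
    x = img.astype(np.float64)
    lo = (x[:, ::2] + x[:, 1::2]) / 2.0          # row low-pass
    hi = (x[:, ::2] - x[:, 1::2]) / 2.0          # row high-pass
    ll = (lo[::2] + lo[1::2]) / 2.0              # column passes
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    h, w = ll.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[::2], lo[1::2] = ll + lh, ll - lh         # undo column passes
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi  # undo row passes
    return out
```

The transform is perfectly invertible, which is what makes it suitable for the lossless and near-lossless pipelines discussed in the abstract.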
Halftoning-based BTC image reconstruction using patch processing with border ... (TELKOMNIKA JOURNAL)
This paper presents a new halftoning-based block truncation coding (HBTC) image reconstruction using a sparse representation framework. HBTC is a simple yet powerful image compression technique which can effectively remove the typical blocking effect and false contours. Two types of HBTC methods are discussed in this paper: ordered dither block truncation coding (ODBTC) and error diffusion block truncation coding (EDBTC). The proposed sparsity-based method suppresses the impulsive noise in ODBTC- and EDBTC-decoded images with a coupled dictionary containing an HBTC image component dictionary and a clean image component dictionary. A sparse coefficient is estimated from the HBTC-decoded image by means of the HBTC image dictionary; the reconstructed image is then built and aligned from the clean (i.e. non-compressed) image dictionary and the predicted sparse coefficient. To further reduce the blocking effect, each image patch is first classified as a "border" or "non-border" patch before the sparse representation framework is applied. Adding Laplacian prior knowledge about the HBTC-decoded image yields better reconstructed image quality. The experimental results demonstrate the effectiveness of the proposed HBTC image reconstruction, which also outperforms former schemes in terms of reconstructed image quality.
Selection of intra prediction modes for intra frame coding in advanced video ... (eSAT Journals)
Abstract: This paper proposes the selection of intra-prediction modes for intra-frame coding in the Advanced Video Coding (AVC) standard using Matlab. The proposed algorithm selects prediction modes for intra-frame coding. There are nine prediction modes for predicting an intra frame in AVC, but not all of them are required for every application. Intra prediction is the first process in the AVC standard: it predicts a macroblock by referring to its previously coded macroblocks to reduce spatial redundancy, but applying all the prediction modes to predict an intra frame increases the computational complexity at the encoder. In the proposed algorithm, all prediction modes (0-8) were applied to intra-frame prediction, but only a few modes, such as mode 0, mode 1, mode 2, mode 4 and mode 6, give good PSNR, a high compression ratio and a low bit rate. Of these, mode 2 gives the best PSNR, compression ratio and bit rate, while mode 5, mode 7 and mode 8 give lower PSNR, a lower compression ratio and an increased bit rate compared to modes 0, 1, 2, 4 and 6. Simulation results are presented using Matlab: the PSNR, compression ratio and bit rate achieved for different quantization parameters are reported for the mother-daughter and foreman frames. Keywords: AVC, PSNR, CAVLC, Macroblock, Prediction modes.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Super-Spatial Structure Prediction Compression of Medical (ijeei-iaes)
The demand to preserve raw image data for further processing has increased with the rapid growth of digital technology. In the medical industry, images generally come in highly correlated sequences. These images are very important, and hence a lossless compression technique is required to reduce the number of bits needed to store the sequences and the time needed to transmit them over the network. The proposed compression method combines super-spatial structure prediction with inter-frame coding, which includes motion estimation and motion compensation, to achieve a higher compression ratio. Motion estimation and compensation are performed with a fast block-matching process, the Inverse Diamond Search method. To further enhance the compression ratio, we propose a new scheme based on Bose-Chaudhuri-Hocquenghem (BCH) codes. Results are compared with prior art in terms of compression ratio and bits per pixel. Experimental results show that our algorithm achieves 30% more reduction on medical image sequences than other state-of-the-art lossless image compression methods.
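Block-matching motion estimation pairs each block in the current frame with its best match in the reference frame. A brute-force SAD (sum of absolute differences) search is sketched below; the Inverse Diamond Search named in the abstract visits far fewer candidates to find the same kind of minimum (function names and search range here are illustrative):

```python
import numpy as np

# Exhaustive block-matching motion estimation: for the block at (y, x) in
# the current frame, search a +/-rng window in the reference frame for the
# displacement minimizing the sum of absolute differences.

def sad(a, b):
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def best_motion_vector(cur, ref, y, x, bs=4, rng=2):
    block = cur[y:y + bs, x:x + bs]
    best, best_cost = (0, 0), None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + bs <= ref.shape[0] and xx + bs <= ref.shape[1]:
                cost = sad(block, ref[yy:yy + bs, xx:xx + bs])
                if best_cost is None or cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best, best_cost
```

Motion compensation then copies the matched reference block and encodes only the residual, which is where the inter-frame coding gains come from.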
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training... (CSCJournals)
The Internet paved the way for information sharing all over the world decades ago, and its popularity for data distribution has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text all across the globe. Despite unprecedented progress in data storage, computing speed and data transmission speed, the demands of available data and its size (due to increases in both quality and quantity) continue to overpower the supply of resources. One reason for this may be how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena, or Lenna) and measuring accuracy and speed. Based on our results, we conclude that the two algorithms are comparable in speed and accuracy. However, the Levenberg-Marquardt algorithm showed slightly better accuracy (in average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fared better in speed (in average training iterations) on a simple MLP structure (2 hidden layers).
A systematic image compression in the combination of linear vector quantisati... (eSAT Publishing House)
Efficient Image Compression Technique using Clustering and Random Permutation (IJERA Editor)
Multimedia data compression is a challenging problem because of the possibility of data loss and the large amount of storage required; minimizing storage space and transmitting these data properly both call for compression. In this dissertation we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is compressed into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which selects the optimal block of the DWT transform function. For the experiments we used standard 256x256 test images, such as Lena, Barbara and Cameraman, sourced from Google.
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS... (ijma)
Image usage over the internet becomes more important each day: over 3 billion images are shared daily, which raises concerns about how to protect image copyrights and how to improve the image-sharing experience. This paper proposes a new robust image watermarking algorithm based on compressed sensing (CS) and quantization index modulation (QIM) watermark embedding. The algorithm capitalizes on CS to compress and encrypt images jointly with entropy coding, the Arnold cat map, pseudo-random numbers and the Advanced Encryption Standard (AES). The proposed algorithm works under the JPEG standard umbrella. Watermark embedding is done in three different locations inside the image using QIM, and those locations differ for each 8x8 image block. The combination of coefficients used for QIM watermark embedding is selected from a combinations table, generated together with the projection matrices using a 10-digit pseudo-random secret key SK1. After the quantization phase, the algorithm shuffles image blocks using Arnold's cat map with a 10-digit pseudo-random secret key SK2, followed by a unique method of splitting every 8x8 block into two unequal parts. Part one acts as the host for two QIM watermarks and then goes through an encoding phase using run-length encoding (RLE) followed by Huffman encoding, while part two goes through sparse watermark embedding, a third QIM watermark embedding, and a CS compression phase, before being encoded with a Huffman encoder. The algorithm aims to combine image watermarking, compression and encryption in one algorithm while balancing how these capabilities interact, to achieve significant improvements in all three.
Fifteen images commonly used in image-processing benchmarks were used to test the algorithm's capabilities, and experiments show that the proposed algorithm achieves robust watermarking jointly with encryption and compression under the JPEG standard framework.
Digital image compression is a modern technology which comprises of wide range of use in different fields as in machine learning, medicine, research and many others. Many techniques exist in image processing. This paper aims at the analysis of compression using Discrete Cosine Transform (DCT) by using special methods of coding to produce enhanced results. DCT is a technique or method used to transform pixels of an image into elementary frequency component. It converts each pixel value of an image into its corresponding frequency value. There has to be a formula that has to be used during compression and it should be reversible without losing quality of the image. These formulae are for lossy and lossless compression techniques which are used in this project. The research test Magnetic Resonance Images (MRI) using a set of brain images. During program execution, original image will be inserted and then some algorithms will be performed on the image to compress it and a decompressing algorithm will execute on the compressed file to produce an enhanced lossless image.
Wavelet based Image Coding Schemes: A Recent Survey ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
International Journal on Soft Computing ( IJSC )ijsc
A variety of new and powerful algorithms have been developed for image compression over the years.
Among them the wavelet-based image compression schemes have gained much popularity due to their
overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG
compression and multiresolution character which leads to superior energy compaction with high quality
reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding
techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree
(SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding
with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet
Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR)
algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image
Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and
the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and
discussions are presented for algorithm development and implementation.
Reversible Data Hiding Using Contrast Enhancement ApproachCSCJournals
Reverse Data Hiding is a technique used to hide the object's data details. This technique is used to ensure the security and to protect the integrity of the object from any modification by preventing intended and unintended changes. Digital watermarking is a key ingredient to multimedia protection. However, most existing techniques distort the original content as a side effect of image protection. As a way to overcome such distortion, reversible data embedding has recently been introduced and is growing rapidly. In reversible data embedding, the original content can be completely restored after the removal of the watermark. Therefore, it is very practical to protect legal, medical, or other important imagery. In this paper a novel removable (lossless) data hiding technique is proposed. This technique is based on the histogram modification to produce extra space for embedding, and the redundancy in digital images is exploited to achieve a very high embedding capacity. This method has been applied to various standard images. The experimental results have demonstrated a promising outcome and the proposed technique achieved satisfactory and stable performance both on embedding capacity and visual quality. The proposed method capacity is up to 129K bits with PSNR between 42-45dB. The performance is hence better than most exiting reversible data hiding algorithms.
PARALLEL GENERATION OF IMAGE LAYERS CONSTRUCTED BY EDGE DETECTION USING MESSA...ijcsit
Edge detection is one of the most fundamental algorithms in digital image processing. Many algorithms have been implemented to construct image layers extracted from the original image based on selecting threshold parameters. Changing theses parameters to get a high quality layer is time consuming. In this paper, we propose two parallel technique, NASHT1 and NASHT2, to generate multiple layers of an input
image automatically to enable the image tester to select the highest quality detected edges. In addition, the
effect of intensive I/O operations and the number of parallel running processes on the performance of the proposed techniques have also been studied.
Abstract: The increasing amount of applications using digital multimedia technologies has accentuated the need to provide copyright protection to multimedia data. This paper reviews one of the data hiding techniques - digital image watermarking. Through this paper we will explore some basic concepts of digital image watermarking techniques.Two different methods of digital image watermarking namely spatial domain watermarking and transform domain watermarking are briefly discussed in this paper. Furthermore, two different algorithms for a digital image watermarking have also been discussed. Also the comparision between the different algorithms,tests performed for the robustness and the applications of the digital image watermarking have also been discussed.
A new partial image encryption method for document images using variance base...IJECEIAES
The proposed method partially and completely encrypts the gray scale Document images. The complete image encryption is also performed to compare the performance with the existing encryption methods. The partial encryption is carried out by segmenting the image using the Quad-tree decomposition method based on the variance of the image block. The image blocks with uniform pixel levels are considered insignificant blocks and others the significant blocks. The pixels in the significant blocks are permuted by using 1D Skew tent chaotic map. The partially encrypted image blocks are further permuted using 2D Henon map to increase the security level and fed as input to complete encryption. The complete encryption is carried out by diffusing the partially encrypted image. Two levels of diffusion are performed. The first level simply modifies the pixels in the partially encrypted image with the Bernoulli’s chaotic map. The second level establishes the interdependency between rows and columns of the first level diffused image. The experiment is conducted for both partial and complete image encryption on the Document images. The proposed scheme yields better results for both partial and complete encryption on Speed, statistical and dynamical attacks. The results ensure better security when compared to existing encryption schemes.
SELECTIVE ENCRYPTION OF IMAGE BY NUMBER MAZE TECHNIQUEijcisjournal
Due to enormous increase in the usage of computers and mobiles, today’s world is currently flooded with huge volumes of data. This paper is primarily focused on multimedia data and how it can be protected from unwanted attacks. Sharing of multimedia data is easy and very efficient, it has been a customary practice to share multimedia data but there is no proper encryption technique to encrypt multimedia data. Sharing of multimedia data over unprotected networks using DCT algorithm and then applying selective encryption-based algorithm has never been adequately studied. This paper introduces a new selective encryption-based security system which will transfer data with protection even in unauthenticated network. Selective encryption-based security system will also minimize time during encryption process which there by achieves efficiency. The data in the image is transmitted over a network is discriminated using DCT transform and then it will be selectively encrypted using Number Puzzle technique, and thus provides security from unauthorized access. This paper discusses about numeric puzzle-based encryption technique
and how it can achieve security and integrity for multimedia data over traditional encryption technique.
Design secure multi-level communication system based on duffing chaotic map a...IJEECSIAES
Cryptography and steganography are among the most important sciences that have been properly used to keep confidential data from potential spies and hackers. They can be used separately or together. Encryption involves the basic principle of instantaneous conversion of valuable information into a specific form that unauthorized persons will not understand to decrypt it. While steganography is the science of embedding confidential data inside a cover, in a way that cannot be recognized or seen by the human eye. This paper presents a high-resolution chaotic approach applied to images that hide information. A more secure and reliable system is designed to properly include confidential data transmitted through transmission channels. This is done by working the use of encryption and steganography together. This work proposed a new method that achieves a very high level of hidden information based on non-uniform systems by generating a random index vector (RIV) for hidden data within least significant bit (LSB) image pixels. This method prevents the reduction of image quality. The simulation results also show that the peak signal to noise ratio (PSNR) is up to 74.87 dB and the mean square error (MSE) values is up to 0.0828, which sufficiently indicates the effectiveness of the proposed algorithm.
Design secure multi-level communication system based on duffing chaotic map a...nooriasukmaningtyas
Cryptography and steganography are among the most important sciences that have been properly used to keep confidential data from potential spies and hackers. They can be used separately or together. Encryption involves the basic principle of instantaneous conversion of valuable information into a specific form that unauthorized persons will not understand to decrypt it. While steganography is the science of embedding confidential data inside a cover, in a way that cannot be recognized or seen by the human eye. This paper presents a high-resolution chaotic approach applied to images that hide information. A more secure and reliable system is designed to properly include confidential data transmitted through transmission channels. This is done by working the use of encryption and steganography together. This work proposed a new method that achieves a very high level of hidden information based on non-uniform systems by generating a random index vector (RIV) for hidden data within least significant bit (LSB) image pixels. This method prevents the reduction of image quality. The simulation results also show that the peak signal to noise ratio (PSNR) is up to 74.87 dB and the mean square error (MSE) values is up to 0.0828, which sufficiently indicates the effectiveness of the proposed algorithm.
Similar to AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERROR CONCEALMENT (20)
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
block diagram and signal flow graph representation
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERROR CONCEALMENT
International Journal of Network Security & Its Applications (IJNSA), Vol.2, No.3, July 2010
DOI : 10.5121/ijnsa.2010.2312
Amit Phadikar (1), Santi P. Maity (2)
(1) Department of Information Technology, MCKV Institute of Engineering, Liluah, Howrah 711204, India. amitphadikar@rediffmail.com
(2) Department of Information Technology, Bengal Engineering and Science University, Shibpur, Howrah 711103, India. santipmaity@it.becs.ac.in
ABSTRACT
Methods like edge-directed interpolation and projection onto convex sets (POCS), which are widely used for image error concealment to produce better image quality, are complex and time consuming. Moreover, those methods are not suitable for real-time error concealment, where the decoder may not have sufficient computation power or the concealment must be done online. In this paper, we propose a data-hiding scheme for error concealment of digital images. Edge direction information of a block is extracted at the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), thus reducing the workload of the decoder. The system performance in terms of fidelity and computational load is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used for recovery of the lost data. Experimental results duly support the effectiveness of the proposed scheme.
KEY WORDS
Error Concealment, Data Hiding, M-ary modulation, QIM, Interpolation.
1. INTRODUCTION
Rapid growth in computing power and wireless communication technology, together with the advent of the World Wide Web (WWW), has made the Internet easily accessible to everyone. Hence, the use of multimedia signals has increased greatly over the last few years. At the same time, imperfect transmission over wireless channels introduces vast amounts of random and burst errors, leading to severe quality degradation of the transmitted data due to fading, signal attenuation, and co-channel interference [1]. As a matter of fact, transmission of block-coded images through error-prone wireless channels often results in lost blocks. Error concealment (EC) techniques reduce visual artefacts by exploiting inherent redundancy through post-processing at the decoder.
Several methods have been developed for recovery of lost data at the decoder, showing different trade-offs between complexity and quality. A set of error concealment techniques that do not use data hiding but provide a similarly high level of performance have been proposed in the literature. These methods operate either in the pixel domain, such as pixel domain interpolation (PDI) [2], directional interpolation (DI) [3, 4], projection onto convex sets (POCS) [5], block matching [6] and neighbourhood regions partitioned matching [7], or in the spectral domain, such as the discrete cosine transform (DCT) [8] and the discrete wavelet transform (DWT) [9]. Li and Orchard provide a good review of these techniques and also propose a set of block-based sequential recovery techniques [10]. The above decoder-side error concealment schemes [2-10] are mainly based on two steps. First, the
decoder estimates some features of the lost information, such as edge directions or motion vectors. Second, the decoder interpolates the lost information from the estimated features in the spatial, transform or temporal domain. These schemes work well in simplified loss scenarios where successfully received data are assumed to be reconstructed loss-free. However, this is most often not the case in real scenarios. Moreover, the first step suffers from high computational complexity and also takes considerable time; it is reported in the literature that the first step takes roughly 30% of the total computation load. As a result, those schemes are not suitable for real-time error concealment where the decoders may not have sufficient computation power or must operate online.
To overcome those problems, Liu and Li [11] were the first authors to use data hiding as an error concealment tool. Their scheme extracts the important information of an image, such as the DC components of each (8×8) block, and embeds it into the host image to achieve the goal. Adsumilli et al. [12] propose a robust image error concealment algorithm using watermarking. The scheme embeds a low-resolution version of each image or video frame into itself using spread-spectrum modulation. The authors analyze various advantages and disadvantages of the full-frame discrete cosine transform (DCT) against conventional block-based DCT embedding. Gur et al. [13] investigate an error concealment technique for images using data hiding. The scheme embeds macro-block-based best neighbourhood matching (BNM) information into the original image in the DWT domain. The experimental results indicate that this error concealment technique is a promising one, especially for erroneous channels causing a wide range of packet losses. Recently, Phadikar et al. [14] proposed an image error concealment scheme using the dual-tree complex wavelet transform (DTCWT) and quantization index modulation (QIM). The experimental results indicate that the inherent redundancy in the DTCWT provides better error concealment than the traditional DWT, with an increase in complexity due to the DTCWT. Yılmaz et al. [15] and Yin et al. [16] propose data hiding schemes that embed edge direction information into the host image, so the decoder needs only to perform the second step using the features (edge direction information) already extracted by the encoder. In addition, performing feature extraction at the encoder is more effective, as the encoder usually has access to more information about the data. Although the complexity of the encoder is increased, the encoder usually has more computational resources and often can perform the tasks off-line. The main drawback of those schemes [15, 16] is that they are fragile in nature, as they use even-odd embedding; consequently, the extracted edge information may be wrong after even a slight signal modification. In our scheme, we overcome this problem using M-ary data modulation based on near-orthogonal QIM.
This paper proposes an error concealment scheme for digital images using data hiding. The goal is achieved by embedding some important information, namely the edge direction of each block, extracted from the original image itself. The edge information is embedded in the DCT domain. We have selected the DCT domain because, in practice, roughly 80% of images are still in JPEG-compressed form. The advantage of using the DCT for data-hiding-based error concealment is the extensive study of the human visual system (HVS) in this domain, which has resulted in the standard JPEG quantization table. As a result, the scheme can be more adaptive to the HVS. The major contributions of this paper are twofold. First, QIM-based data hiding is used to embed the edge direction information of a block at the encoder. This helps the decoder to extract correct edge direction information easily and quickly; in other words, it decreases the workload at the decoder. Moreover, QIM is selected as it provides considerable performance advantages over spread-spectrum and low-bit(s) modulation in terms of the achievable trade-offs among distortion, rate, and robustness of the embedding [17]. Second, the system performance in terms of fidelity and robustness is improved using M-ary data modulation implemented with near-orthogonal QIM. To the best of our knowledge, M-ary QIM has not yet been explored by the watermarking research community for image error concealment applications.
The rest of the paper is organized as follows: the problem formulation is outlined in Section 2. Basic principles and key features of M-ary near-orthogonal QIM are outlined in Section 3. The
proposed error concealment scheme is introduced in Section 4. Some experimental results are
presented in Section 5. The paper is concluded in Section 6.
2. PROBLEM FORMULATION
In an image, nearby pixel values are highly correlated, which makes it possible to recover the information of a lost block by spatial-domain interpolation. In this section, we discuss interpolation techniques for the recovery of both smooth and edge blocks.
2.1. Smooth Block Concealment
If a lost block is a non-edge block, each pixel of a lost block is calculated as a linear
combination of the nearest pixels in the block boundaries. In this method, pixel values of the
lost block are interpolated by the following formula [3]:
p(r,c) = \frac{d_R\,p_L(r,1) + d_L\,p_R(r,S) + d_B\,p_T(1,c) + d_T\,p_B(S,c)}{d_L + d_R + d_T + d_B}    (1)

where p(r,c) is the interpolated pixel value and p_x(r,c) represents the value of the nearest pixel in one of the four neighbouring blocks: top (T), bottom (B), left (L) or right (R). The symbol d_x is the distance between p(r,c) and p_x(r,c). We use p(1,1) to represent the bottom-left corner of an (S×S) macro block; the first and second coordinates denote the horizontal and vertical direction, respectively.
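As a concrete illustration, the rule of Eq. (1) can be coded in a few lines. The function below is a minimal sketch, not the authors' implementation; the indexing convention (distances counted from 1 to S towards each boundary) is an assumption made for illustration.

```python
def conceal_smooth_block(top, bottom, left, right, S=8):
    """Recover a lost S x S block via Eq. (1): each missing pixel is a
    distance-weighted average of the nearest boundary pixels of the
    top (T), bottom (B), left (L) and right (R) neighbouring blocks."""
    block = [[0.0] * S for _ in range(S)]
    for r in range(S):
        for c in range(S):
            # Distances from pixel (r, c) to each of the four boundaries.
            d_t, d_b = r + 1, S - r
            d_l, d_r = c + 1, S - c
            # A nearer boundary receives a larger weight (the opposite distance).
            num = (d_r * left[r] + d_l * right[r] +
                   d_b * top[c] + d_t * bottom[c])
            block[r][c] = num / (d_l + d_r + d_t + d_b)
    return block
```

With uniform boundary values the interpolation reproduces the same flat value throughout the block, which is the expected behaviour for a smooth region.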
2.2. Edge Block Concealment
Since edge integrity plays an important role in human visual perception, we can utilize spatially
correlated information more thoroughly by performing interpolation, based on edge direction in
the local neighbourhood of the lost block. Then the recovery of lost block consists of two steps
i.e. finding edge direction information and then edge directed interpolation.
2.2.1. Classification of Edge Direction by Gradient Voting
An efficient and widely used method for edge detection is to calculate the gradient field by means of masks. In the present scheme, for simplicity, we consider only four types of edges and assume one block contains only one type of edge. Let R_1, R_2, R_3, and R_4 denote the responses of the masks in Fig. 1, from left to right, where the R's are given by Eq. 2. Suppose that the four masks are run individually through an image. If, at a certain point in the image, R_i > R_j for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i. For example, if at a point R_1 > R_j for j = 2, 3, 4, that particular point is said to be more likely associated with a horizontal line [18].
R = w_1 z_1 + w_2 z_2 + \cdots + w_9 z_9    (2)

where z_i is the gray value of the pixel associated with mask coefficient w_i.
Figure 1. Edge detection masks: (a) horizontal, (b) +45°, (c) vertical, (d) −45°.
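The voting rule above can be sketched as follows. The four 3×3 masks are the standard line-detection masks from the image-processing literature [18]; since Fig. 1 is not reproduced here, their exact coefficients are an assumption.

```python
# Standard 3x3 line-detection masks; assumed to match Fig. 1 of the paper.
MASKS = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "+45":        [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "-45":        [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
}

def mask_response(patch, mask):
    """R = w1*z1 + ... + w9*z9 (Eq. 2) for one 3x3 neighbourhood."""
    return sum(mask[i][j] * patch[i][j] for i in range(3) for j in range(3))

def classify_edge(patch):
    """Return the direction whose mask response R_i dominates all R_j, j != i."""
    return max(MASKS, key=lambda name: mask_response(patch, MASKS[name]))
```

For instance, a bright middle row surrounded by dark rows yields the largest response for the horizontal mask, so the patch votes "horizontal".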
2.2.2. Edge Directed Interpolation
For the selected direction, a series of one-dimensional linear interpolations are carried out to
obtain the pixel values of the lost block. Interpolated pixel values within the lost block are
calculated by the following simple formula [3]:
p(r,c) = \frac{d_1 p_2 + d_2 p_1}{d_1 + d_2}    (3)

where p_1 and p_2 are the points on the boundaries from which the missing pixel is interpolated, and d_1 and d_2 denote the distances of p(r,c) from p_1 and p_2, respectively.
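A one-line sketch of Eq. (3), with p1, p2, d1 and d2 as defined above:

```python
def interp_point(p1, p2, d1, d2):
    """Eq. (3): linear interpolation of a missing pixel along the edge
    direction, weighted by its distances d1, d2 from boundary points p1, p2."""
    return (d1 * p2 + d2 * p1) / (d1 + d2)
```

A pixel one unit away from p1 and four units away from p2 lands proportionally closer to p1, as the weighting intends.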
3. BASIC PRINCIPLES AND KEY FEATURES OF M-ARY QIM
The basic quantization index modulation (QIM) algorithm quantizes a feature vector X using a quantizer Q_\Delta(\cdot), chosen from a family of quantizers based on the message bit (m) to be embedded [17]. The watermarked feature vector \tilde{X} is then obtained as follows:

\tilde{X} = Q_\Delta(X + d(m)) - d(m);  m \in \{0, 1\}    (4)

where \Delta is a fixed quantization step size and d(\cdot) is the dither used for embedding the watermark bit.
At the decoder, the test signal \tilde{X} is requantized using the same family of quantizers to determine the embedded message bit, i.e.

\hat{m} = \arg\min_{m \in \{0,1\}} \sum_{i=1}^{L} \left| \tilde{X}_i - \left[ Q_\Delta(\tilde{X}_i + d_i(m)) - d_i(m) \right] \right|    (5)

where we assume that the scheme uses a soft decoder and the symbol L is the length of the dither. The performance of the traditional QIM can be further improved by M-ary modulation. We have modified the traditional QIM [17] to make it an M-ary modulation, as described in the next subsection.
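For the scalar binary case, Eqs. (4) and (5) can be sketched as follows. The dither choice d(m) = m·Δ/2 is the common binary convention and is assumed here; the paper's own dither construction appears in Section 3.1.

```python
def q_delta(x, delta):
    """Uniform quantizer Q_delta(x) with step size delta."""
    return delta * round(x / delta)

def qim_embed(x, m, delta):
    """Eq. (4): x~ = Q_delta(x + d(m)) - d(m), with dither d(m) = m*delta/2."""
    d = m * delta / 2.0
    return q_delta(x + d, delta) - d

def qim_decode(x_tilde, delta):
    """Eq. (5): choose the message whose quantizer reconstructs x~ best."""
    def dist(m):
        d = m * delta / 2.0
        return abs(x_tilde - (q_delta(x_tilde + d, delta) - d))
    return min((0, 1), key=dist)
```

Decoding recovers the bit exactly in the noise-free case; robustness comes from the fact that the two dithered lattices are Δ/2 apart, so a perturbation smaller than Δ/4 cannot move a sample closer to the wrong quantizer family.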
3.1. M-ary QIM
In M-ary modulation, a group of symbols is treated as a single entity. We can relate the number of bits (N) and the number of distinct symbols (M) by the following equation [19]:

M = 2^N    (6)
Obviously, the binary transmission system is a special case of an M-ary data transmission
system. An effective method to implement an M-ary modulation technique is based on near-orthogonal dithers. A group of near-orthogonal dithers d_i \in \{d_0, d_1, \ldots, d_{M-1}\} is generated independently using the Hadamard function of a standard math library and random numbers generated with a secret key (k). The smallest possible Hadamard matrix is of order two and is defined as:

H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}    (7)
A Hadamard matrix whose order is a power of two may be constructed from H_2 by the Kronecker product:

H_L = H_2 \otimes H_{L/2} = \begin{bmatrix} H_{L/2} & H_{L/2} \\ H_{L/2} & -H_{L/2} \end{bmatrix}    (8)
The basic characteristic of the Hadamard matrix is that all rows and columns are orthogonal. The sequence of random numbers is i.i.d., following the Gaussian distribution \eta(0,1). The generation of the dithers can be defined as follows:

d_i = H(\cdot) \times R \times \Delta/2    (9)

where H(\cdot) is the Hadamard function used to generate the orthogonal codes, \Delta is the step size for dither modulation, and R is a random number generated using the following rule:

R = [\Re_1(k) + \Re_2(k) + \cdots + \Re_n(k)] / n    (10)

The function \Re(k) is the random number generator. The symbol R is multiplied with H(\cdot) to increase the security of the system.
One of the prominent characteristics of the dithers generated in this manner is their near-orthogonality [19], i.e.
• d_i for i = 1, 2, ..., M should be distinct sequences.
• The spatial correlation \langle d_i, d_j \rangle for i \neq j should be minimal; ideally, the sequences d_i and d_j should be orthogonal when i \neq j.
These properties are illustrated by Fig. 2, where, as an example, d is a set of 256 dithers and the correlations of d_{50} with all other dithers in the set are shown.
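Eqs. (7)-(10) can be sketched as below; `random.Random(key)` stands in for the keyed generator \Re(k), and the averaging length n is an illustrative choice.

```python
import random

def hadamard(n):
    """Sylvester construction: H_2 of Eq. (7), then H_L = H_2 (x) H_{L/2}
    via the Kronecker pattern of Eq. (8). n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def make_dithers(M, delta, key, n_avg=4):
    """Eqs. (9)-(10): d_i = H_i * R * delta/2, with R the average of n keyed
    pseudo-random Gaussian draws. Each Hadamard row yields one dither."""
    rng = random.Random(key)
    R = sum(rng.gauss(0, 1) for _ in range(n_avg)) / n_avg
    return [[h * R * delta / 2.0 for h in row] for row in hadamard(M)]

def correlation(d1, d2):
    """Plain inner product, as used to illustrate near-orthogonality (Fig. 2)."""
    return sum(a * b for a, b in zip(d1, d2))
```

Because the rows of a Hadamard matrix are exactly orthogonal, the inner product between distinct dithers generated this way is zero; the "near"-orthogonality discussed in the paper refers to the properties of its randomized construction in general.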
Figure 2. Linear correlation between a dither and its near-orthogonal version.
If each dither d in the group is used to represent an M-ary message symbol m \in \{0, 1, \ldots, M-1\}, it carries \log_2 M = N bits of information once chosen for data embedding. With a QIM embedding function, the message m can be embedded into the feature vector X according to the following rule:

\tilde{X} = Q_\Delta(X + d(m)) - d(m);  m \in \{0, 1, \ldots, M-1\}    (11)
For watermark decoding, the same set of dithers d_i \in \{d_0, d_1, \ldots, d_{M-1}\} is used. Fig. 3 shows the block diagram of watermark decoding in an M-ary system. At the decoder, the test signal \tilde{X} is requantized using the family of quantizers with dithers d_i to determine the embedded message symbol, i.e.
\hat{m} = \arg\min_{m \in \{0, 1, \ldots, M-1\}} \sum_{i=1}^{L} \left| \tilde{X}_i - \left[ Q_\Delta(\tilde{X}_i + d_i(m)) - d_i(m) \right] \right|    (12)
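Putting Eqs. (11)-(12) together, a vector roundtrip might look like the following sketch. Scaling the Hadamard rows by Δ/4 is an illustrative choice (any scaling that keeps the dithers less than Δ/2 apart componentwise keeps them distinguishable), not the paper's exact parameterization.

```python
def q_delta(x, delta):
    """Uniform quantizer with step size delta."""
    return delta * round(x / delta)

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

def mary_embed(x, m, dithers, delta):
    """Eq. (11): componentwise X~ = Q_delta(X + d(m)) - d(m)."""
    return [q_delta(xi + di, delta) - di for xi, di in zip(x, dithers[m])]

def mary_decode(x_tilde, dithers, delta):
    """Eq. (12): soft decoding -- pick the symbol whose dither re-quantizes
    the received vector with the smallest total deviation."""
    def dist(m):
        return sum(abs(xi - (q_delta(xi + di, delta) - di))
                   for xi, di in zip(x_tilde, dithers[m]))
    return min(range(len(dithers)), key=dist)

# Illustrative dithers: Hadamard rows scaled by delta/4 (an assumed scaling).
DELTA = 1.0
DITHERS = [[h * DELTA / 4 for h in row] for row in hadamard(4)]
```

With Δ = 1 and four 4-sample dithers, each of the M = 4 symbols survives an embed/decode roundtrip in the noise-free case.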
Figure 3. Structure of decoder for the extraction of M-ary watermark.
Figure 4. Bit error rate (BER) performance for different noise powers. Number of signal points: 65536, watermark power: 9.2 dB, capacity: 4096.
In the case of the conventional M-ary decoder illustrated in Fig. 3, the total number of operations required for the decoding of an M-ary symbol is approximately

T_0 = LM    (13)

where L is the length of the feature vector. One operation is defined as one real division plus one real addition plus one real subtraction. Apparently, T_0 is a linear function of M, so for large values of M > 4 the computational workload for data extraction grows exponentially, i.e. O(2^N). In other words, the greater the value of M, the better the system performance in terms of robustness for a fixed capacity, at the cost of increased decoding complexity. Fortunately, M-ary modulation does not increase the computational complexity in our scheme. In our scheme, the edge information is represented by five symbols; as a result, the proposed scheme performs five correlation calculations to detect a symbol, while traditional QIM needs six correlation calculations to detect the
7. International Journal of Network Security & Its Applications (IJNSA), Vol.2, No.3, July 2010
175
edge type. Fig. 4 highlights robustness performance in term of bit error rate (BER) for different
M values. The word BER here implies the error in the extracted watermark signal when the
watermarked signal is transmitted through the noisy channel or manipulation is done by some
user. The BER is calculated through simulation by transmitting image data through noisy
channel. It is seen that even if the noise power reaches to -5 dB there is no BER even for large
M values. Each point on the curves is obtained as the average value of 100 independent
experimentations conducted over large number of benchmark images.
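The BER experiment described above can be approximated by a small Monte-Carlo simulation: embed random symbols, add Gaussian noise, decode, and count errors. All parameters below (host statistics, dither generation, trial count) are our assumptions, not the paper's exact setup:

```python
import numpy as np

def qim_embed(x, d, delta):
    """Dithered QIM embedding (Eq. 11)."""
    return delta * np.round((x + d) / delta) - d

def decode(y, dithers, delta):
    """Minimum-distance M-ary decoding (Eq. 12)."""
    return int(np.argmin([np.sum(np.abs(y - qim_embed(y, d, delta)))
                          for d in dithers]))

def symbol_error_rate(M=4, L=63, delta=8.0, noise_std=0.5, trials=200, seed=1):
    """Monte-Carlo estimate of the symbol error rate of M-ary QIM
    under additive Gaussian channel noise (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    dithers = rng.uniform(0.0, delta, size=(M, L))
    errors = 0
    for _ in range(trials):
        m = rng.integers(M)
        x = rng.normal(0.0, 20.0, L)          # stand-in host features
        y = qim_embed(x, dithers[m], delta)
        y = y + rng.normal(0.0, noise_std, L) # channel noise
        errors += (decode(y, dithers, delta) != m)
    return errors / trials
```

With noise well below half the quantization step, the simulated error rate stays at zero, consistent with the zero-BER region reported in Fig. 4.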
4. PROPOSED ERROR CONCEALMENT TECHNIQUE
The overall error concealment scheme consists of two stages: the image encoding process and the image decoding process. The detailed block diagram representations of the encoding and decoding processes
are shown in Fig. 5 and Fig. 6, respectively.
Figure 5. Block diagram of error concealment encoding process: Encoder.
Figure 6. Block diagram of error concealment decoding process: Decoder.
4.1. Image Encoding Process
The objective of image encoding process is the extraction of edge information followed by
embedding of that edge information at the encoder using M-ary modulation. The inputs to the
encoding process are the gray scale image and secret keys for the generation of near-orthogonal
dithers that are used for watermark bits embedding. The image encoding process consists of the
following steps:
Step 1: Block Based Partitioning of Image: The original (host) image (H) is partitioned into non-overlapping blocks of size (S×S). We select S=8 for watermark insertion, so that JPEG compression affects each bit embedded in a block independently.
Step 2: Extraction of Edge Information: Various edge directional operators/masks are applied
on each block and the blocks are categorized as non-edge or edge block depending on the
response of the operators. If a block contains an edge, the type of edge direction is determined
as described in Section 2.2.1 [18].
Step 3: Representation of Edge Information: The edge information of each block is represented by a 3-bit symbol depending on the type of edge it contains. Table 1 shows a typical such representation when four directions of edges are considered. The MSB (most significant bit) represents whether the block contains an edge or not; if the MSB is zero, the block contains no edge. The next two bits represent the type of edge direction. Table 1 thus defines a set of five symbols by which the edge information of a block is represented. In other words, five dithers are sufficient for embedding the edge information of a block using M-ary modulation.
Table 1. Symbolic representation of edge information for a block.

2nd Bit  1st Bit  0th Bit  Significance (Edge Type)
   0        0        0     No edge
   1        0        0     Horizontal
   1        0        1     +45°
   1        1        0     Vertical
   1        1        1     −45°
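The mapping of Table 1 can be expressed directly as a lookup table (a sketch; the bit packing order is as listed in the table, MSB first):

```python
# 3-bit edge symbols from Table 1: the MSB flags edge presence,
# the lower two bits encode the edge direction.
EDGE_SYMBOLS = {
    0b000: "No edge",
    0b100: "Horizontal",
    0b101: "+45 degrees",
    0b110: "Vertical",
    0b111: "-45 degrees",
}

def describe(symbol):
    """Map a 3-bit symbol to its edge type (None for unused codes)."""
    return EDGE_SYMBOLS.get(symbol)
```

Only five of the eight 3-bit codes are used, which is why five dithers suffice for M-ary embedding.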
Step 4: Image Transformation: The 2-dimensional DCT of each cover image block is
calculated.
Step 5: Watermark Embedding: The edge orientation information of each block is represented
by a symbol (m) as described in Step 3 and the information/message (m) is embedded as
watermark into another block by modulating DCT coefficients based on M-ary modulation. The
different steps for data embedding are described as follows:
A): Generation of near-Orthogonal Dither:
Depending on the number of symbols (M), a group of near-orthogonal dithers
i 0 1 M -1d {d ,d ,...,d }∈ are generated independently using Hadamard function of standard math
library and a secret key k as described in Eq. (9). The length of id is equal to the number of
coefficients that are used for embedding watermark bit in a block. In our scheme, one symbol
(m) is represented by 3 bits and the length of id is of 63, as one (8×8) block contains 63 AC
coefficients. The scheme leaves DC coefficients unchanged at the time of data embedding, as
use of DC component for watermark embedding reduces image fidelity of the watermarked
image.
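A possible realization of the dither generation, assuming a Sylvester Hadamard construction with key-seeded column scrambling; the paper's Eq. (9) is not reproduced in this excerpt, so the details below are illustrative only:

```python
import numpy as np

def _hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def make_dithers(M, length, delta, key):
    """Sketch: derive M near-orthogonal dithers from rows of a Hadamard
    matrix, scrambled by a key-seeded column permutation and truncated
    to the desired length. The exact construction of Eq. (9) may differ."""
    n = 1
    while n < length:                 # Hadamard order must be a power of two
        n *= 2
    H = _hadamard(n)[:M]              # M mutually orthogonal +/-1 rows
    rng = np.random.default_rng(key)  # secret key seeds the scrambling
    rows = H[:, rng.permutation(n)][:, :length]  # truncation -> near-orthogonal
    return (delta / 4.0) * (rows + 1)            # map {-1,+1} -> {0, delta/2}
```

Truncating 64-column Hadamard rows to 63 entries is what makes the dithers "near"-orthogonal rather than exactly orthogonal, matching the small residual correlations shown in Fig. 2.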
B): Watermark Bit Insertion: The message (m) is embedded in a block by modulating the AC coefficients. The qth watermarked AC coefficient S_q is obtained as follows:

S_q = Q_Δ( X_q + d_q(m) ) − d_q(m);  m ∈ {0, 1, ..., M−1}   (14)
where X_q is the original qth AC coefficient and Q_Δ is a uniform quantizer (and dequantizer) with step Δ. The symbol d(m) is the dither that represents the message/symbol (m) for watermark embedding. As an alternative to M-ary modulation, one can also embed 3 bits in an (8×8) block by dividing the AC coefficients into three non-overlapping sets, where each set is used for embedding one watermark bit. However, this alternative data embedding process makes the scheme more fragile, so M-ary modulation is the appropriate choice for embedding 3 bits in a block. After watermark embedding, the inverse DCT is performed and the watermarked image is obtained.
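Eq. (14) applied to a whole block might look as follows, with the DC coefficient left untouched as described above (a sketch; a real implementation would typically traverse the AC coefficients in zig-zag order):

```python
import numpy as np

def embed_in_block(dct_block, dither, delta):
    """Embed one M-ary symbol into the 63 AC coefficients of an 8x8
    DCT block (Eq. 14). The DC coefficient (index 0) is left untouched
    to preserve fidelity. `dither` is the length-63 dither d(m)."""
    coeffs = dct_block.flatten().astype(float)
    ac = coeffs[1:]                                      # skip DC
    coeffs[1:] = delta * np.round((ac + dither) / delta) - dither
    return coeffs.reshape(dct_block.shape)
```

The inverse DCT of the modified block would then yield the watermarked pixels.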
4.2. Image Decoding Process
The proposed error concealment scheme is blind in the sense that the decoding process does not require the original/host image. Blocks of the received signal (H′) that are lost during transmission are concealed at the receiver using the steps described below.
Step 1: Steps 1, 4, and 5(A) of the image encoding process are performed over the received and possibly damaged image. The same step size (Δ) is used at decoding as was used at encoding. In order to achieve successful error concealment, the exact locations of the errors, i.e. the damaged blocks, should be detected using some error detection mechanism. In our scheme the differences of pixel values between two neighbouring lines are used for detecting transmission errors [20].
Step 2: Watermark Extraction: The edge orientation information of a lost block is extracted from the block in which it was embedded. The message (m) is extracted from the received DCT coefficients of a block using the following rule:

m̂ = arg min_{m ∈ {0, 1, ..., M−1}} Σ_{i=1}^{L} | X̃_i − ( Q_Δ( X̃_i + d_i(m) ) − d_i(m) ) |   (15)
The extracted watermark information m̂ represents the edge direction of a block. Note that although our scheme uses M-ary modulation, the decoding complexity is still less than that of traditional QIM. Using traditional QIM one has to perform six correlation calculations to detect the edge type (two dithers are needed per watermark bit, so the three watermark bits that represent the edge type of a block require six correlations). In M-ary modulation, it only takes five correlations, as five symbols are sufficient to represent the edge type of a block.
Step 3: Recovery of Lost Blocks (Error Concealment): If a block is lost, the extracted edge
direction information ( ˆm ) for that block is then used for the concealment purpose. If the lost
block is a smooth block, recovery is done as described in Section 2.1. On the other hand, if the
lost block is an edge block, the recovery is done by edge directed interpolation as described in
Section 2.2.2.
5. PERFORMANCE EVALUATIONS
This section presents simulation results to show the performance of the proposed error concealment technique. The efficiency of the proposed technique is evaluated over a large number of benchmark images [21] including eight popular test images: Lena, Fishing boat, Pepper, Cameraman, Baboon, Opera, Pueblo bonito, and F16, shown in Fig. 7. The image database contains 29 test images having varied image characteristics. All of the test images are 8 bit/pixel gray scale images of size (512×512). The step size (Δ) taken into consideration is 8 (i.e. a watermark power (WP) of 7.27 dB). In this study, we use peak-signal-to-noise-ratio (PSNR, in dB)
and mean-structure-similarity-index-measure (MSSIM) [22] as the objective measures of the
watermarked image quality. PSNR (dB) is defined as follows:
PSNR (in dB) = 10 log10 [ 255² / ( (1/(X·Y)) Σ_{r=1}^{X} Σ_{c=1}^{Y} ( P(r,c) − P̃(r,c) )² ) ]   (16)
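Eq. (16) translates directly into code; for instance:

```python
import numpy as np

def psnr(original, distorted):
    """PSNR in dB between two 8-bit gray scale images (Eq. 16)."""
    original = np.asarray(original, dtype=float)
    distorted = np.asarray(distorted, dtype=float)
    mse = np.mean((original - distorted) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```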
where ( , )P r c represents the pixel value at coordinate (r, c) in the original undistorted image,
and ( , )P r c% represents the pixel value in the watermarked (stego) image. X and Y are the
number of rows and columns, respectively. MSSIM is defined as follows:
MSSIM(P, P̃) = (1/M′) Σ_{j′=1}^{M′} SSIM(P_{j′}, P̃_{j′})   (17)

where

SSIM(P, P̃) = [ l(P, P̃) ]^δ · [ c(P, P̃) ]^β · [ s(P, P̃) ]^γ   (18)
The functions l(P, P̃), c(P, P̃) and s(P, P̃) are the luminance comparison, contrast comparison and structure comparison functions, respectively. The symbols δ > 0, β > 0 and γ > 0 are parameters used to adjust the relative importance of the three components. Note that 0 ≤ MSSIM(P, P̃) ≤ 1, and MSSIM(P, P̃) = 1 corresponds to no distortion. The symbols P_{j′} and P̃_{j′} are the image contents at the j′-th local window, i.e. the block, and M′ is the number of samples in the quality map.
(a) (b) (c) (d)
(e) (f) (g) (h)
Figure 7. Test images; (a): Lena, (b): Fishing boat, (c): Pepper, (d): Cameraman, (e): Baboon, (f): Opera, (g): Pueblo bonito, (h): F16.
In the first set of experiments, we have studied the performance of the proposed scheme in terms of fidelity of the watermarked image. Table 2 illustrates the PSNR (dB) and MSSIM values of various watermarked images after embedding the edge information into the respective host images. Fig. 8 shows the watermarked images. The length of the watermark is 12288 bits, which represent the edge information of all (8×8) blocks of an image of size (512×512). From Table 2 it is clear that the quality of the watermarked image in terms of PSNR (dB) is almost the same for the various test images. It is also interesting to see from the results of Table 2 that although for a given watermark power the visual quality in PSNR (dB) does not vary much across the watermarked images, a high value of structural similarity is found for images with high texture, like Baboon and Pueblo bonito.
Table 2. PSNR (dB) and MSSIM of the watermarked image.
Images     Lena    Fishing boat  Pepper  Cameraman
PSNR (dB)  40.88   40.66         40.88   40.77
MSSIM      0.96    0.97          0.96    0.95
Images     Baboon  Opera         Pueblo bonito  F16
PSNR (dB)  40.93   40.91         40.87          40.80
MSSIM      0.98    0.97          0.98           0.95
(P=40.88, M=0.96) (P=40.66, M=0.97) (P=40.88, M=0.96) (P=40.77, M=0.95)
(a) (b) (c) (d)
(P=40.93, M=0.98) (P=40.91, M=0.97) (P=40.87, M=0.98) (P=40.80, M=0.95)
(e) (f) (g) (h)
Figure 8. Watermarked images (a): Lena, (b): Fishing boat, (c): Pepper, (d): Cameraman, (e): Baboon, (f): Opera, (g): Pueblo bonito, (h): F16. (P, M) above each image represents the PSNR (in dB) and MSSIM values of the image.
In the second set of experiments, we have studied the performance of the proposed error concealment scheme under lost blocks of size (8×8). The lost blocks are simulated in this experimentation by randomly selecting (8×8) blocks and setting the original pixel values to zero (0). The error blocks are both random and consecutive in nature. The number of lost blocks varies from 50 to 200. Performance results of the proposed error concealment scheme are given in Table 3. Figs. 9(a)-9(d) and Figs. 11(a)-11(d) show the error images having lost blocks of size (8×8). Figs. 10(a)-10(d) and Figs. 12(a)-12(d) are the error concealed images of Figs. 9(a)-9(d) and Figs. 11(a)-11(d), respectively. From Figs. 9-12, it is clear that even if the number of lost blocks reaches 200, the scheme can still conceal the errors effectively. As already mentioned, we have performed our experiments over a large number of images; however, for demonstration purposes we present here the results for eight images with varied image characteristics. The numerical values shown in Table 3 indicate that the improvements in PSNR (dB) due to the proposed error concealment method are, on average, 12 dB, 13 dB, 13.5 dB and 14 dB for 50, 100, 150, and 200 lost blocks, respectively. These values, in other words, highlight that the proposed error concealment scheme is quite effective even for a large number of lost blocks. Fig. 13 and Fig. 14 show the relative improvement in quality after error concealment in terms of PSNR (dB) and MSSIM for different images. The graphical representations reflect the fact that if the image contains high-texture regions (like Baboon, Pueblo bonito), error concealment using interpolation performs worse than on images containing low-texture regions like Lena and Pepper. Figs. 13 and 14 also show that a relatively high degree of improvement in quality, i.e. in PSNR (dB) and MSSIM values, can be achieved through error concealment when the number of lost blocks is large. This is reasonable, as a smaller number of lost blocks means a better quality received image signal, which in turn leaves less scope for further improvement through error concealment. Moreover, from Figs. 10(c)-10(d) and Figs. 12(c)-12(d), it is also clear that when a lost block has a complex geometric structure, such as a corner or streaks, our result is not as good at the corner location, as we consider only one edge type in a block.
(P=37.68, M=0.95) (P=35.37, M=0.95) (P=34.10, M=0.94) (P=33.15, M=0.94)
(a) (b) (c) (d)
Figure 10. Error concealed image Pepper: (a) 50 lost blocks; (b) 100 lost blocks; (c) 150 lost blocks; (d) 200 lost blocks.
(P=25.36, M=0.96) (P=22.26, M=0.94) (P=20.18, M=0.91) (P=18.82, M=0.88)
(a) (b) (c) (d)
Figure 11. Error image Baboon: (a) 50 lost blocks; (b) 100 lost blocks; (c) 150 lost blocks; (d) 200 lost blocks.
(P=35.55, M=0.98) (P=33.37, M=0.97) (P=31.96, M=0.96) (P=30.64, M=0.95)
(a) (b) (c) (d)
Figure 12. Error concealed image Baboon: (a) 50 lost blocks; (b) 100 lost blocks; (c) 150 lost blocks; (d) 200 lost blocks.
The experimental results shown in Fig. 13 and Fig. 14 also suggest that the proposed error concealment scheme is quite effective for slow fading as well as for wireless channels suffering deep fades. In our scheme, a small number of lost blocks is taken to indicate that the channel is under low fades, while a large number of lost blocks is taken to indicate a deep fade. The algorithm developed can be used to design an adaptive transmission scheme through the estimation of channel parameters (the embedded edge information acts as a pilot signal without introducing the bandwidth and synchronization problems of existing pilot-based systems), as well as for image and video signal transmission over deeply faded channels. The quality of the extracted edge information indicates the current status of the channel condition. When the quality of the error concealed image goes below the acceptable level, feedback information is sent from the receiver to the transmitter indicating that the channel is under a deep fade and that the current data rate is not well suited. The data transmission rate may then be lowered for some time, until the quality of the error concealed image rises back above the acceptable threshold level.
Figure 13. Improvement in quality after error concealment in terms of PSNR (dB).
Figure 14. Improvement in quality after error concealment in terms of MSSIM.
In the third set of experiments, we have studied the effect of different watermark powers (WP) on the relative gain/loss in error concealment performance. The watermark power is defined as [23]

WP = 10 log10( Δ² / 12 ) dB   (19)
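Eq. (19) is just the mean-squared quantization error Δ²/12 of a uniform quantizer expressed in dB, which reproduces the step-size/power pairs used in this paper (Δ = 4, 8, 12 give 1.25, 7.27 and 10.79 dB):

```python
import math

def watermark_power_db(delta):
    """Watermark power of Eq. (19): the mean-squared quantization
    error delta**2 / 12 of a uniform quantizer, expressed in dB."""
    return 10.0 * math.log10(delta ** 2 / 12.0)
```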
where the symbol ∆ corresponds to the step size used for watermarking. Table 4 shows the
relative visual quality of the watermarked images for different watermark power obtained
through different step sizes. The corresponding MSSIM values are also reported in order to have
an idea about the loss in structural information due to the variation of watermark power. Fig. 15 and Fig. 16 show the relative improvement in quality after error concealment with respect to PSNR (dB) and MSSIM for different watermark powers. The graphical representations reflect the fact that when the watermark embedding power is increased, the improvements in both PSNR (dB) and MSSIM decrease. It is also interesting to see from the results of Figs. 15 and 16 that although, for a given watermark power, the relative improvement in visual quality in PSNR (dB) does not vary much as the number of lost blocks varies from 50 to 200, significant improvement in structural information is found when more blocks are lost.
Figure 15. Improvement in PSNR (dB) after concealment of lost block for different WP.
Figure 16. Improvement in MSSIM after error concealment of lost block for different WP.
Table 4. Average PSNR (dB) and MSSIM values for watermarked images with different
watermark powers.
WP (dB)        1.25    7.27    10.79
Step size (Δ)  4       8       12
PSNR (dB)      46.94   40.88   37.28
MSSIM          0.99    0.96    0.92
Table 5. NCC values for various image processing operations and geometric attacks.
Image processing operation   Without M-ary QIM   M-ary QIM
JPEG 95                      0.91                0.99
JPEG 85                      0.73                0.99
JPEG 75                      0.61                0.99
JPEG 70                      0.53                0.99
Mean Filtering (3×3)         0.82                0.95
Median Filtering (3×3)       0.86                0.96
Down & Up Sampling (0.75)    0.93                0.98
Rotation (0.25)              1.00                1.00
In the fourth set of experiments, we have studied the robustness performance of the proposed scheme against common image and signal processing operations and geometric attacks like slight rotation, available in the StirMark 4 benchmark [24, 25, 26], since QIM-based schemes are usually brittle under those operations. For the image scaling operation, before watermark extraction the attacked image is rescaled to the original size. For the rotation operation, we have estimated the rotation angle of the watermarked images by the control point selection method with the help of the original images. The rotated watermarked images are then inverse rotated and corrected by linear interpolation. These corrected watermarked images are then used for watermark detection. This is done to compensate for the loss of data due to the rotation operation. Table 5 shows the performance improvement in terms of normalized cross-correlation (NCC) [27] when three watermark bits (used to represent the edge information) are inserted in an (8×8) block using traditional QIM and M-ary QIM. It is seen that M-ary modulation offers higher robustness than traditional QIM, which ultimately helps in extracting reliable edge information for the concealment purpose.
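The NCC measure used in Table 5 can be computed as follows (a standard definition; the normalization used in [27] may differ slightly):

```python
import numpy as np

def ncc(w, w_hat):
    """Normalized cross-correlation between the embedded watermark w
    and the extracted watermark w_hat; 1.0 means perfect recovery.
    Assumes neither input is the all-zero vector."""
    w = np.asarray(w, dtype=float)
    w_hat = np.asarray(w_hat, dtype=float)
    return float(np.sum(w * w_hat) /
                 (np.sqrt(np.sum(w ** 2)) * np.sqrt(np.sum(w_hat ** 2))))
```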
The computational complexity of the decoder, in terms of the required numbers of additions, subtractions, divisions and multiplications, is calculated and compared with other methods [4, 28]. Table 6 summarizes the comparison results for restoring 64 pixel values, i.e. one (8×8) block. Generally, edge-based methods [4] are computationally intensive since they require estimation of edge directions. On the other hand, the scheme proposed in [28] has high complexity as it is based on solving a set of 8 linear equations with 8 unknowns. The results in Table 6 highlight the superiority of our method over [4, 28].
Table 6. Computational complexity of various error concealment methods.
Method           Addition  Subtraction  Multiplication  Division  Total
Method [28]      1720      -            1920            -         3640
Method [4]       8256      -            4096            -         12,352
Proposed Method  315       315          -               315       945
Table 7. Comparison between the proposed and reference methods.
PLR 3% 5% 10% 15% 20% 25% 30%
Lee et al. (2004) [33] 31.12 26.22 25.04 24.14 23.16 - -
Lie et al. (2005) [31] 35.62 34.98 33.34 32.43 31.35 - -
Gur et al. (2005) [13] 28.91 27.72 26.10 25.21 24.43 23.98 23.85
Chen et al. (2006)[2] - 28.72 26.12 24.47 - - -
Lin et al. (2008) [29] - 40.91 36.87 35.02 34.08 33.09 31.72
Anhari et al. (2008) [30] 37.45 36.12 34.82 34.40 31.90 30.91 30.13
Wu et al. (2008) [32] 34.07 28.43 27.11 25.16 24.98 - -
Ma et al. (2009) [4] 35.51 34.00 28.12 26.57 24.10 - -
Yılmaz et al. [15] 38.12 35.17 33.53 27.17 28.14 29.65 29.24
Yin et al. [16] 39.81 38.54 36.15 32.10 33.57 32.32 32.10
Proposed 39.95 39.83 38.69 38.42 37.15 35.12 33.52
Error reconstruction performance of the proposed method is compared with both non-data
hiding [2, 4, 32, 33] and data hiding based methods [13, 15, 16, 29, 30, 31]. It is observed from
the results shown in Table 7 that our method offers better PSNR (dB) values even if packet loss
rate (PLR) increases to 30%. From Table 7 it is clear that when the PLR is low, non-data-hiding based error concealment methods offer performance similar to data-hiding based approaches. On the other hand, the former (non-data-hiding based) methods yield low improvement in quality under high burst error conditions. This is because the methods [2, 4, 32, 33] recover error
blocks depending on the correlation/interpolation of the received surrounding blocks in the
decoder. As a matter of fact, those schemes work well in simplified loss scenarios where
successfully received data are assumed to be reconstructed loss-free. This is often not the case.
Hence, those schemes return low improvement in quality at high burst error condition (i.e. at
high PLR), as most of the surrounding blocks may be lost. We also compare the result of the
proposed method with the work reported in [13, 15, 16, 29, 30, 31]. Results shown in Table 7
highlight the superiority of our method. The schemes proposed in [15, 16] offer poor
performance as the data (edge information) embedding is done using even/odd embedding. The
even/odd embedding is fragile in nature. The embedded data may be lost due to signal
attenuation. In our scheme, both the above drawbacks are eliminated using M-ary QIM data
hiding and conceal the error quite effectively in burst error condition, even though there is some
quality degradation due to data embedding at the transmitter. Moreover, it is also seen that the
use of data hiding reduces nearly 32.05% computation time of the decoder over [2, 4]. The test
is conducted on Pentium IV, 2.80 GHz processor, with 512 MB RAM using MATLAB 7
version and when 50% of (8×8) blocks are erroneous.
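The robustness advantage of QIM over even/odd embedding noted above can be sketched as follows; this is a minimal scalar M-ary QIM illustration, not the paper's actual embedder (the function names and the choices M = 4, delta = 8 are illustrative assumptions):

```python
def qim_embed(x, m, M=4, delta=8.0):
    """Embed symbol m in {0,...,M-1} into sample x by quantizing x
    onto the sub-lattice of step `delta` shifted by m*delta/M."""
    offset = m * delta / M
    return round((x - offset) / delta) * delta + offset

def qim_extract(y, M=4, delta=8.0):
    """Decode by re-quantizing y onto each of the M sub-lattices and
    picking the symbol whose reconstruction lies nearest to y."""
    def requant(m):
        offset = m * delta / M
        return round((y - offset) / delta) * delta + offset
    return min(range(M), key=lambda m: abs(y - requant(m)))
```

With delta = 8 and M = 4 the sub-lattices are spaced 2 apart, so any perturbation smaller than delta/(2M) = 1.0 still decodes to the embedded symbol; an even/odd (parity) embedder, by contrast, can lose the bit as soon as a sample drifts by about 0.5, which is the fragility observed for [15, 16].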
6. CONCLUSION
An image error concealment scheme based on M-ary QIM watermarking is proposed. The data-hiding capacity is increased with the help of M-ary modulation. The scheme reduces the decoder workload (computation time) by nearly 32.05%, which makes it suitable for real-time error concealment where the decoder may lack sufficient computation power or must operate online. Simulation results also demonstrate that the proposed scheme provides significant improvement in terms of objective evaluations, especially under high burst error conditions. Finally, the proposed error concealment scheme can be used to design an adaptive data transmission scheme by estimating the wireless channel condition from the extracted edge direction information. Future work may concentrate on improving the performance of the proposed scheme using channel coding, as well as on a hardware implementation of the scheme.
International Journal of Network Security &amp; Its Applications (IJNSA), Vol.2, No.3, July 2010
REFERENCES
[1] Sklar, B., (1997) “Rayleigh fading channels in mobile digital communication systems, part I:
Characterization”, IEEE Communication Magazine, Vol. 35, No. 9, pp 90–100.
[2] Chen, B. N., & Lin Y, (2006) “Hybrid error concealment using linear interpolation”, In Proc.
International Symposium on Communications and Information Technologies, Bangkok, pp 926 –
931.
[3] Nemethova, O., Moghrabi, A. A., & Rupp M, (2005) “Flexible error concealment for H.264
based on directional interpolation”, In Proc. International Conference on Wireless Networks,
Communications and Mobile Computing, Maui, HI, pp 1255 – 1260.
[4] Ma, M., Au, O. C., Chan, S. H. G., & Sun M. T., (2009) “Edge-directed error concealment”, IEEE Transactions on Circuits and Systems for Video Technology, doi 10.1109/TCSVT.2009.2035839.
[5] Sun, H., & Kwok W., (1995) “Concealment of damaged block transform coded images using
projection onto convex sets”, IEEE Transactions on Image Processing, Vol. 4, No. 4, pp 470–477.
[6] Chen, M. J., Chen, C. S., & Chi M. C., (2003) “Recursive block-matching principle for error
concealment algorithm”, In Proc. of International Symposium on Circuits and Systems, Iasi,
Romania, pp 528 -531.
[7] Wong, W. Y. F., Cheng, A. K. Y., & Horace H. S., (2001) “Concealment of damaged blocks by
neighbourhood regions partitioned matching”, In Proc. of International Conference on Image
Processing, Thessaloniki, Greece, pp 45-48.
[8] Cao, J., Li, F., & Guo J., (2003) “An efficient error concealment algorithm for DCT encoded
images”, In Proc. IEEE Canadian Conference on Electrical and Computer Engineering, Canada,
pp 753 – 756.
[9] Rane, S. D., Remus, J., & Sapiro G., (2002) “Wavelet-domain reconstruction of lost blocks in
wireless image transmission and packet-switched networks”, In Proc. International Conference
on Image Processing, Rochester, New York, pp 309-312.
[10] Li, X., & Orchard M. T., (2002) “Novel sequential error-concealment techniques using orientation adaptive interpolation”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 10, pp 857–864.
[11] Liu, Y., & Li Y., (2000) “Error concealment for digital images using data hiding”, In Proc. of
the Ninth DSP Workshop, pp 1-6.
[12] Adsumilli, C. B., Farias, M. C. Q., Mitra, S. K., & Carli M., (2005) “A robust error concealment
technique using data hiding for image and video transmission over lossy channels”, IEEE
Transactions on Circuits and Systems for Video Technology, Vol. 15, No. 11, pp 1394-1406.
[13] Gur, G., Alagoz, F., & AbdelHafez M., (2005) “A novel error concealment method for images
using watermarking in error-prone channels”, In Proc. 16th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Germany, pp 2637 – 2641.
[14] Phadikar, A., & Maity S. P., (2009) “Image error concealment based on data hiding using dual-
tree complex wavelets,” In Proc. of the 4th Indian International Conference on Artificial Intelligence, Tumkur, India, pp 1764-1781.
[15] Yılmaz, A., & Alatan A. A., (2003) “Error concealment of video sequences by data hiding”, In Proc. International Conference on Image Processing, Barcelona, Spain, pp 679 – 682.
[16] Yin, P., Liu, B., & Yu H. H., (2001) “Error concealment using data hiding”, In Proc IEEE
International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, Utah, pp
1453 – 1456.
[17] Chen, B., & Wornell G. W., (2001) “Quantization index modulation: a class of provably good methods for digital watermarking and information embedding”, IEEE Transactions on Information Theory, Vol. 47, pp 1423-1443.
[18] Gonzalez, R. C., Woods, R. E., & Eddins S. L., (2005) Digital image processing using Matlab,
Pearson Education.
[19] Maity, S. P., Kundu, M. K., Das, T. S., & Nandi P. K., (2005) “Robustness improvement in
spread spectrum watermarking using M-ary modulation”, In Proc. 11th National Conf. on
Communication, India, pp 569 -573.
[20] Ngan, K. N., & Steele R., (1982) “Enhancement of PCM and DPCM images corrupted by transmission errors”, IEEE Transactions on Communications, Vol. 30, pp 257–265.
[21] http://www.petitcolas.net/fabien/watermarking/image_database/index.html.
[22] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli E. P., (2004) “Image quality assessment: from error visibility to structural similarity”, IEEE Transactions on Image Processing, Vol. 13, No. 4, pp 600-612.
[23] Boyer, J. P., Duhamel, P., & Talon J. B. (2006) “Performance analysis of scalar DC-QIM for watermark detection,” In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, pp 237 – 240.
[24] Petitcolas, F. A. P., (2000) “Watermarking schemes evaluation”, IEEE Signal Processing Magazine, Vol. 17, pp 58–64.
[25] Petitcolas, F. A. P., Anderson, R. J., & Kuhn M. G., (1998) “Attacks on copyright marking
systems”, In Proc. of Second International Workshop on Information Hiding (LNCS 1525),
Springer-Verlag, pp 219–239.
[26] (StirMark 4.0 Online Sources) Available:
<http://www.petitcolas.net/fabien/watermarking/stirmark/>.
[27] Phadikar, A., Maity, S. P., & Mandal M. K., (2008) “Quantization based data hiding scheme for
quality access control of images,” In Proc. of the 12th IASTED International Conference on Internet and Multimedia Systems and Applications (IMSA 2008), Hawaii, USA, pp 113-118.
[28] Shirani, S., Kossentini, F., & Ward R., (2000) “Reconstruction of baseline JPEG coded images
in error prone environments”, IEEE Transactions on Image Processing, Vol. 9, No. 7, pp 1292-
1298.
[29] Lin, S. D., & Tsai C. W., (2008) “An effective data embedding for error concealment”, In Proc.
3rd International Conference on Innovative Computing Information and Control, Dalian, pp 28-
32.
[30] Anhari, A. K., Sodagari, S. , & Avanaki A. N., (2008) “Hybrid error concealment in image
communication using data hiding and spatial redundancy,” In Proc. International Conference on
Telecommunications, pp 1 – 5.
[31] Lie, W. N., Lin, T. C. I., Tsai, D. C., & Lin G. S., (2005) “Error resilient coding based on
reversible data embedding technique for H.264/AVC video”, In Proc. International Conference
on Multimedia and Expo, Amsterdam, Netherlands, pp 1174 – 1177.
[32] Wu, J., Liu, X., & Yoo K. Y. (2008) “A temporal error concealment method for H.264/AVC using motion vector recovery”, IEEE Transactions on Consumer Electronics, Vol. 54, No. 4, pp 1880-1885.
[33] Lee, P. J., Chen, H. H., & Chen L. G., (2004) “A new error concealment algorithm for
H.264/AVC video transmission”, In Proc. International Symposium On Intelligent Multimedia,
Video, and Speech Processing, Hong Kong, pp 619-622.
Authors
Amit Phadikar received his B.E. in Computer Science and Engineering in 2002 from Vidyasagar University, W.B., India, and his M.Tech degree in Information Technology (Artificial Intelligence) in 2005 from Rajiv Gandhi Proudyogiki Vishwavidyalaya (RGPV), Bhopal, M.P., India. Since 2005 he has been working as a Lecturer at the Department of Information Technology, MCKV Institute of Engineering, Liluha, Howrah, W.B., India.
His research areas include digital image watermarking, digital signal processing, and content-based image retrieval. He is currently working towards his Ph.D. degree in Engineering (Information Technology) at Bengal Engineering and Science University, Shibpur, India.
Santi P. Maity received his B.E. in Electronics and Communication Engineering and his M.Tech degree in Microwaves, both from the University of Burdwan, India. He received his Ph.D. degree in Engineering (Computer Science and Technology) from Bengal Engineering and Science University, Shibpur, India. From January 2009 to July 2009 he did post-doctoral work on watermarking in lured applications at the “Laboratoire des Signaux et Systèmes (CNRS-Supelec-Université Paris-Sud 11)” in France. He is at present working as an Associate Professor at the Department of Information Technology, Bengal Engineering and Science University, Shibpur.
His research areas include digital image watermarking, multiuser detection in CDMA, digital signal processing, digital wireless communication, and VLSI watermarking. He has contributed about 80 research papers to well-known and prestigious archival journals and international refereed conferences, and as chapters in edited volumes.