This document compares two image compression techniques: the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). It first describes DCT encoding and decoding: encoding breaks an image into blocks, applies the DCT, and quantizes the coefficients; decoding dequantizes and applies the inverse DCT. Simulation results show the compressed output for three sample images using 8x8 DCT blocks. DCT achieves good compression ratios but introduces block artifacts; DWT preserves more information at the same compression but requires more processing power. The document aims to analyze and compare the performance of these two techniques.
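The encode/decode pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the document's implementation: it uses an orthonormal 8x8 DCT matrix and a single flat quantization step `q` in place of JPEG's perceptual quantization table.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T is the 2D DCT."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def encode_block(block, q):
    """Forward 2D DCT of an 8x8 block followed by uniform quantization."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    return np.round(coeffs / q).astype(int)

def decode_block(quantized, q):
    """Dequantization followed by the inverse 2D DCT (C is orthogonal)."""
    C = dct_matrix(8)
    return C.T @ (quantized * q) @ C

block = np.arange(64, dtype=float).reshape(8, 8)
reconstructed = decode_block(encode_block(block, q=1.0), q=1.0)
```

With a small `q` the reconstruction error stays within the rounding error of the quantizer; larger `q` trades quality for compression, which is where the block artifacts described above come from.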
A Comparative Study of Image Compression Algorithms (IJORCS)
The document compares three image compression algorithms: Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and a hybrid DCT-DWT algorithm. DCT is used in JPEG and provides simple hardware implementation but can cause blocking artifacts at high compression. DWT provides multi-resolution decomposition and achieves higher compression ratios but requires more computation. The hybrid algorithm aims to combine the advantages of DCT and DWT by applying DWT followed by DCT, allowing for better performance than either individual method. Experimental results showed the hybrid approach generally had better performance in terms of PSNR, MSE, and compression ratio.
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP (csandit)
Most watermarking methods use pixel values or coefficients as the judgment condition for embedding or extracting a watermark image. Variation in these values may lead to an inaccurate condition and hence an incorrect judgment. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels. From observing common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or a halftone transformation. This greatly helps reduce the modification strength needed to survive image processing operations. Experimental results show that the proposed method can resist various attacks while keeping image quality high.
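The judgment principle described in this abstract, deciding a bit from the ordering of two pixels rather than their absolute values, can be illustrated with a toy embedder. This is a hypothetical sketch, not the paper's scheme: the pixel pair locations and the separation margin are assumptions, and clipping near 0/255 is ignored.

```python
import numpy as np

def embed_bit(img, p1, p2, bit, margin=4):
    """Embed one bit by enforcing a scale relationship between pixels p1 and p2:
    bit 1 -> img[p1] > img[p2], bit 0 -> img[p1] < img[p2], separated by a margin
    so that mild value changes from signal processing do not flip the ordering."""
    out = img.astype(int).copy()
    mid = (out[p1] + out[p2]) // 2
    hi, lo = mid + margin // 2 + 1, mid - margin // 2 - 1
    out[p1], out[p2] = (hi, lo) if bit else (lo, hi)
    # Note: values near 0 or 255 would need clip handling in a real scheme.
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bit(img, p1, p2):
    """Judgment depends only on which pixel is larger, not on exact values."""
    return 1 if int(img[p1]) >= int(img[p2]) else 0
```

Because extraction compares two pixels rather than testing an absolute threshold, small uniform shifts in pixel values (as produced by many processing operations) leave the judgment unchanged.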
Content Based Video Retrieval in Transformed Domain using Fractional Coeffici... (CSCJournals)
With the development of multimedia and growing databases, there is huge demand for video retrieval systems. Due to this, there is a shift from text-based retrieval systems to content-based retrieval systems. The selection of extracted features plays an important role in content-based video retrieval. Good feature selection also allows the time and space costs of the retrieval process to be reduced. Different methods [1,2,3] have been proposed to develop video retrieval systems that achieve better performance in terms of accuracy.
The proposed technique uses transforms to extract the features. The transforms used are the Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant, and Discrete Hartley transforms. The energy-compaction property of these transforms is exploited to reduce the feature vector size by taking fractional coefficients [5] of the transformed video frames. A smaller feature vector means less time spent comparing feature vectors, resulting in faster retrieval. The feature vectors are extracted and coefficient sets are considered as feature vectors (100%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012%, 0.006%, and 0.003% of the complete transformed coefficients). The database consists of 500 videos spread across 10 categories.
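Taking "fractional coefficients" of a transformed frame amounts to keeping only a small low-frequency corner of the coefficient matrix. A sketch under assumptions (the DCT as the example transform, square frames, and a square kept region; the paper's exact fractions come from repeatedly halving the kept area):

```python
import numpy as np

def dct2(frame):
    """Orthonormal 2D DCT of a square frame (row-column method)."""
    n = frame.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2)
    return C @ frame @ C.T

def fractional_feature(frame, fraction):
    """Feature vector from the top-left (low-frequency) square of DCT
    coefficients; the square holds roughly `fraction` of all coefficients."""
    n = frame.shape[0]
    side = max(1, int(round(np.sqrt(fraction) * n)))
    return dct2(frame)[:side, :side].ravel()
```

For a 16x16 frame, `fraction=0.0625` (6.25%) keeps a 4x4 corner, i.e. 16 of the 256 coefficients, which is why feature-vector comparison gets correspondingly cheaper.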
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is an open-access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Digital Image Compression using Hybrid Scheme using DWT and Quantization wit... (IRJET Journal)
This document discusses a hybrid image compression technique using both the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). It begins with an introduction to image compression and its goals of reducing file size while maintaining quality. Next, it outlines the proposed hybrid compression method, which applies DWT to blocks of the image, then DCT to the approximation coefficients from the DWT. This is intended to achieve higher compression ratios than DCT or DWT alone, with fewer blocking artifacts and false contours. Simulation results on various test images show the hybrid method provides higher PSNR and lower MSE than the individual transforms, demonstrating it outperforms them in terms of both quality and compression. The document concludes that the hybrid approach is better suited for image compression applications.
An Efficient Multiplierless Transform algorithm for Video Coding (CSCJournals)
This paper presents an efficient algorithm to accelerate software video encoders/decoders by reducing the number of arithmetic operations for the Discrete Cosine Transform (DCT). A multiplierless Ramanujan Ordered Number DCT (RDCT) is presented which computes the coefficients using shift and addition operations only. The reduction in computational complexity improved the performance of the video codec by almost 58% compared with the commonly used integer DCT. The results show that significant computation reduction can be achieved with negligible average peak signal-to-noise ratio (PSNR) degradation. The average structural similarity index metric (SSIM) also confirms that the degradation due to the approximation is minimal.
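The core idea behind a multiplierless transform, replacing each constant multiplication with shifts and additions, can be shown in isolation. This is not the RDCT itself, just a sketch of the principle: any multiplication by a non-negative integer constant decomposes into shift-and-add steps over the constant's set bits.

```python
def shift_add_mul(x, c):
    """Multiply integer x by a non-negative constant c using only shifts and
    adds, by decomposing c into its set bits (c = sum of powers of two)."""
    assert c >= 0, "sketch handles non-negative constants only"
    acc = 0
    shift = 0
    while c:
        if c & 1:
            acc += x << shift  # add x * 2^shift for each set bit of c
        c >>= 1
        shift += 1
    return acc
```

In hardware or tight software loops, this turns a multiplier into a handful of adders; transforms whose constants have few set bits (the motivation for Ramanujan-ordered numbers) need correspondingly few additions.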
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
A novel RRW framework to resist accidental attacks (eSAT Journals)
Abstract: Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright while preserving the intactness of host images and providing robustness against unintentional attacks. Past histogram-rotation-based methods suffer from extremely poor invisibility of the watermarked images and limited robustness when extracting watermarks from images degraded by unintentional attacks. This paper proposes a wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC) method with enhanced pixel-wise masking (EPWM). The method constructs new watermark embedding and extraction procedures by histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity, and it achieves both reversibility and invisibility of the watermarks. The experimental results show comprehensive performance in terms of reversibility, robustness, invisibility, capacity, and run-time complexity, making the method widely applicable to different kinds of images. Keywords: integer wavelet transform, k-means clustering, masking, robust reversible watermarking (RRW)
Improved anti-noise attack ability of image encryption algorithm using de-noi... (TELKOMNIKA JOURNAL)
Information security is one of the important issues of the information age, used to preserve secret information during transmission in practical applications. For image encryption, many information-security schemes have been applied; such approaches can be categorized into two domains: the frequency domain and the spatial domain. The presented work develops an encryption technique on the basis of a conventional watermarking system using singular value decomposition (SVD), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness in comparison with other conventional approaches, and the enhanced approach achieves high robustness against Gaussian noise attacks by using a DWT-based denoising step. MSE and peak signal-to-noise ratio (PSNR) are the performance measures underlying this study's results, which show that the algorithm has high robustness against Gaussian noise attacks.
LabVIEW with DWT for denoising the blurred biometric images (ijcsa)
In this paper, denoising of a blurred biometric image (a fingerprint) is presented and investigated using LabVIEW applications; the image is blurred and corrupted with Gaussian noise. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into two parts, which increases the manipulation speed for large biometric images. The work includes two tasks: the first designs the LabVIEW system that calculates and presents the approximation coefficients, by which the image's blur factor is reduced to a minimum value according to the proposed algorithm; the second removes the image's noise by calculating the regression coefficients according to the Bayesian-shrinkage estimation method.
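The mechanics behind wavelet shrinkage denoising of the kind this abstract describes can be sketched with a one-level Haar DWT. This is a generic illustration, not the paper's LabVIEW system: Bayesian shrinkage (BayesShrink) derives the threshold from noise and signal variance estimates, while here a fixed threshold is passed in.

```python
import numpy as np

def haar_dwt(x):
    """One-level 1D Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction from (a, d)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(d, t):
    """Soft thresholding of detail coefficients, the shrinkage operation at the
    heart of Bayesian wavelet denoisers."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
```

Denoising then amounts to `haar_idwt(a, soft_threshold(d, t))`: noise, which spreads thinly across the detail coefficients, is shrunk away while large (signal-bearing) coefficients survive.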
This document compares the DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) image compression techniques. It finds that DWT provides higher compression ratios and avoids blocking artifacts compared to DCT. DWT allows for better localization in both the spatial and frequency domains. It also has inherent scalability and better identifies visually relevant data, leading to higher compression ratios. However, DCT is faster than DWT. Experimental results on test images show that DWT achieves higher PSNR and lower MSE and BER than DCT, along with a slightly higher compression ratio.
Transform coding is used in image and video processing to exploit correlations between neighboring pixels. The discrete cosine transform (DCT) is commonly used as it provides good energy compaction and de-correlation. DCT represents data as a sum of variable frequency cosine waves in the frequency domain. It has properties like separability and orthogonality that make it efficient for computation. DCT exhibits excellent energy compaction and removal of redundancy between pixels, making it useful for image and video compression.
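The separability property mentioned above means a 2D DCT can be computed as 1D DCTs along the rows and then along the columns, rather than from the full 2D definition. A minimal numpy sketch of the row-column method (orthonormal DCT-II, not tied to any particular implementation):

```python
import numpy as np

def dct1d(x):
    """Orthonormal 1D DCT-II applied to each row of x."""
    n = x.shape[-1]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2)
    return x @ C.T

def dct2d_separable(img):
    """2D DCT by the row-column method: 1D DCT on rows, then on columns.
    Equals the matrix form C @ img @ C.T because the transform is separable."""
    return dct1d(dct1d(img).T).T
```

Orthogonality is what makes the transform energy-preserving, which is why energy compaction shows up as a few large coefficients rather than as lost signal.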
This document discusses deep learning and convolutional neural networks. It provides an example of using a CNN for face detection and recognition. The CNN architecture includes convolution and subsampling layers to extract features from images. Backpropagation is used to minimize error and adjust weights. The example detects faces in images with 80% accuracy for faces and 57% for non-faces. Iterative search with a CNN is also used for object recognition in full images.
An Approach for Image Deblurring: Based on Sparse Representation and Regulari... (IRJET Journal)
This document proposes an approach for image deblurring based on sparse representation and a regularized filter. The approach splits the blurred input image into patches, estimates sparse coefficients for each patch using dictionary learning, updates the dictionary, and estimates the deblur kernel. The deblur kernel is applied using Wiener deconvolution and further processed with a regularized filter to recover the original image. The approach was tested on MATLAB and evaluation metrics like RMSE, PSNR, and SSIM along with visual analysis showed it performed better deblurring compared to existing methods.
The document discusses DCT (Discrete Cosine Transform) based steganography. It introduces steganography and some examples of its historical uses. It then summarizes the basics of DCT, why it is useful for steganography, an example steganography algorithm that embeds messages in the DCT coefficients of images, and possibilities for future improvements like using both steganography and cryptography for increased security. The presentation was created by a group of students for their steganography project.
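One common way to embed message bits in DCT coefficients, in the spirit of the algorithm this summary describes, is to quantize a chosen mid-frequency coefficient to an even or odd multiple of a step size. This is an illustrative quantization-index-modulation-style sketch, not the presentation's exact algorithm; the coefficient position and step size are assumptions.

```python
import numpy as np

def embed_bit(coeffs, pos, bit, step=2.0):
    """Embed one bit in a DCT coefficient by forcing its quantized value to the
    parity of the bit (even multiple of `step` for 0, odd for 1)."""
    out = np.asarray(coeffs, dtype=float).copy()
    q = int(np.round(out[pos] / step))
    if (q & 1) != bit:
        q += 1  # move to the nearest multiple with the right parity
    out[pos] = q * step
    return out

def extract_bit(coeffs, pos, step=2.0):
    """Recover the bit from the parity of the quantized coefficient."""
    return int(np.round(coeffs[pos] / step)) & 1
```

Because the bit lives in the parity of a quantized value rather than in the raw coefficient, it survives small perturbations up to half a step, which is why DCT-domain embedding is more robust than spatial LSB embedding.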
This document discusses image compression using the discrete cosine transform (DCT). It develops simple Mathematica functions to compute the 1D and 2D DCT. The 1D DCT transforms a list of real numbers into elementary frequency components. It is computed via matrix multiplication or using the discrete Fourier transform with twiddle factors. The 2D DCT applies the 1D DCT to rows and then columns, making it separable. These functions illustrate how Mathematica can be used to prototype image processing algorithms.
This document presents a comparative analysis of digital image watermarking techniques in the frequency domain using MATLAB Simulink. It discusses watermarking using discrete cosine transform (DCT) and discrete wavelet transform (DWT). For DCT, the image is divided into blocks and DCT is applied before embedding the watermark in middle frequency coefficients. For extraction, the same process is reversed. For DWT, the image is decomposed into sub-bands before embedding the watermark into the low-high frequency sub-band. Extraction follows the reverse process. The document also proposes a technique using both DCT and DWT that embeds a watermark into DCT coefficients of DWT sub-bands for increased robust
An Approach for Image Deblurring: Based on Sparse Representation and Regulari... (IRJET Journal)
This document presents an approach for image deblurring based on sparse representation and a regularized filter. The approach involves splitting the blurred input image into patches, estimating sparse coefficients for each patch, learning dictionaries from the coefficients, and merging the patches. The merged patches are subtracted from the blurred image to obtain the deblur kernel. Wiener deconvolution with the kernel is then applied and followed by a regularized filter to recover the original image without blurring. The approach was tested on MATLAB and evaluation metrics like RMSE, PSNR, and SSIM showed it performed better than existing methods, recovering images with more details and contrast.
Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images (ijtsrd)
A new artifact removal method, a cascade of a Gaussian fuzzy edge decider and fuzzy image correction, is proposed. In this design, a highly compressed (i.e., low bit rate) image is considered. Each overlapped block of the image is fed to a Gaussian fuzzy decider to check whether the central pixel of the block needs correction; if so, the central pixel is corrected by the fuzzy gradient of its neighbors. Experimental results show remarkable improvement with the presented gFAR algorithm compared to past methods, both subjectively (visual quality) and objectively (PSNR). Deepak Gambhir "Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-6, October 2020, URL: https://www.ijtsrd.com/papers/ijtsrd33361.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/33361/gaussian-fuzzy-blocking-artifacts-removal-of-high-dct-compressed-images/deepak-gambhir
A BLIND ROBUST WATERMARKING SCHEME BASED ON SVD AND CIRCULANT MATRICES (csandit)
Multimedia security has been the focus of considerable research activity because of its wide application area. The major technology for achieving copyright protection, content authentication, access control, and multimedia security is watermarking: the process of embedding data into a multimedia element such as an image or audio; this embedded data can later be extracted from, or detected in, the embedded element for different purposes. In this work, a blind watermarking algorithm based on SVD and circulant matrices is presented. Every circulant matrix is associated with a matrix for which the SVD decomposition coincides with the spectral decomposition. This leads to an improvement of the Chandra algorithm [1]; our presentation includes a discussion of the data hiding capacity, watermark transparency, and robustness against a wide range of common image processing attacks.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM... (VLSICS Design)
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35MHz, processing an 8x8 block in 6604ns with a pipeline latency of 140 cycles.
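The zigzag reordering stage described above traverses the quantized 8x8 block along its anti-diagonals so that low-frequency coefficients come first and the trailing zeros group together. A small reference model of the scan order (software sketch of what the zigzag buffer implements in hardware):

```python
def zigzag_indices(n=8):
    """Zigzag scan order for an n x n block, as used after quantization in
    JPEG: anti-diagonals of constant i+j, alternating traversal direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag(block):
    """Flatten a square block (list of lists) in zigzag order."""
    return [block[i][j] for i, j in zigzag_indices(len(block))]
```

Grouping the (mostly zero) high-frequency coefficients at the end of the scan is what makes the subsequent run-length/entropy coding effective.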
A Review on Image Compression using DCT and DWT (IJSRD)
This document reviews image compression techniques using discrete cosine transform (DCT) and discrete wavelet transform (DWT). It discusses how DCT transforms images from spatial to frequency domains, allowing for energy compaction and efficient encoding. DWT is a multi-resolution technique that represents images at different frequency bands. The document analyzes various studies that have used DCT and DWT for compression and compares their performance in terms of metrics like peak signal-to-noise ratio and compression ratio. It finds that DWT generally provides better compression performance than DCT, though DCT requires less computational resources. A hybrid DCT-DWT technique is also proposed to combine the advantages of both methods.
This document summarizes an image watermarking algorithm in the discrete wavelet transform (DWT) domain for image authentication. The algorithm first converts the input image to grayscale and divides the Y component into blocks. It then applies a 2-level DWT and uses a Canny edge detector to generate a watermark from the image contours. The watermark is embedded in the DWT coefficients after applying an Arnold transform for security. In extraction, the watermark is recovered from the DWT coefficients and compared to the original to authenticate the image. Experiments show the algorithm is effective against attacks like image pasting while maintaining high PSNR for perceptual invisibility of the watermark.
This document discusses optimizing image convolution operations for GPUs using CUDA. It describes how to implement a separable convolution filter in two passes, one for rows and one for columns. This reduces redundant data loads compared to a naive single-pass implementation. The document also discusses techniques like loading multiple pixels per thread and padding thread blocks to achieve coalesced global memory accesses and avoid idle threads when processing boundary pixels. Overall, the key optimizations are using a separable filter, loading multiple pixels per thread, and padding for coalesced memory access.
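The two-pass separable filter described above can be modeled on the CPU to check its equivalence to a single-pass 2D filter; the GPU-specific points (coalescing, thread-block padding) don't change the arithmetic. A sketch with zero padding and 'same' output size:

```python
import numpy as np

def filter2d_naive(img, kernel):
    """Direct 2D filtering (correlation), 'same' output size, zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def filter2d_separable(img, col, row):
    """Two-pass filtering with a separable kernel: row pass, then column pass.
    Equals filter2d_naive with the outer-product kernel, at far fewer loads."""
    tmp = filter2d_naive(img, row[None, :])
    return filter2d_naive(tmp, col[:, None])
```

For a k x k separable kernel, each output pixel needs 2k multiply-adds instead of k^2, which is the data-reuse saving the document's two-pass CUDA implementation exploits.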
This document discusses fast algorithms for computing the discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) using Winograd's method.
The conventional DCT and IDCT algorithms have high computational complexity due to cosine functions. Winograd's algorithm reduces the number of multiplications required for matrix multiplication by rearranging terms.
The document proposes applying Winograd's algorithm to DCT and IDCT computation by representing the transforms as matrix multiplications. This approach reduces the number of multiplications required for an 8x8 block from over 16,000 to just 736, with fewer additions and subtractions as well, leading to faster DCT and IDCT computation than the conventional algorithms.
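To see where such savings come from, it helps to count multiplications for the two textbook baselines. The figures below are generic counts for the direct definition versus the row-column matrix form; they are assumptions for illustration and do not reproduce the document's specific 16,000 and 736 figures, which depend on how its baseline and the Winograd rearrangement are counted.

```python
def mult_counts(n=8):
    """Multiplications per n x n block for a 2D DCT computed (a) directly from
    the 2D definition (each of n^2 outputs sums n^2 products) and (b) by the
    row-column method as two n x n matrix multiplications."""
    direct = n ** 4
    row_column = 2 * n ** 3
    return direct, row_column
```

For n = 8 this already drops the count from 4096 to 1024 before any Winograd-style rearrangement, which then trades further multiplications for additions.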
Effect of Block Sizes on the Attributes of Watermarking Digital Images (Dr. Michael Agbaje)
This work examines the effect of block sizes on attributes (robustness, capacity, time of watermarking, visibility and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT) function. The DCT function breaks up the image into various frequency bands and allows watermark data to be easily embedded. The advantage of this transformation is the ability to pack input image data into a few coefficients. The block size 8 x 8 is commonly used in watermarking. The work investigates the effect of using block sizes below and above 8 x 8 on the attributes of the watermark. The attributes of robustness and capacity increase as the block size increases (62-70 dB, 31.5-35.9 bits/pixel). The time for watermarking reduces as the block size increases. The watermark is still visible for block sizes below 8 x 8 but invisible for those above it. Distortion decreases sharply from a high value at 2 x 2 block size to a minimum at 8 x 8 and gradually increases with block size. The overall observation indicates that the watermarked image gradually reduces in quality due to fading above 8 x 8 block size. For easy detection of an image against piracy, the block size 16 x 16 gives the best output result because it closely resembles the original image in terms of displayed visual quality despite the fact that it contains a hidden watermark.
This document compares JPEG and JPEG2000 image compression techniques using objective and perceptual quality measures. JPEG2000 provides higher PSNR values at all bitrates but JPEG has better picture quality scale (PQS) scores, a perceptual measure, at moderate and high bitrates. At very low bitrates below 0.5 bpp, JPEG2000 produces higher quality images according to PQS due to its wavelet-based compression method. The study uses four test images with different spatial and frequency characteristics to evaluate the compression methods.
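PSNR, the objective measure used in the comparison above, is defined directly from the MSE between the original and compressed images. A small reference implementation (8-bit images assumed, hence a peak value of 255):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that PSNR is a purely pixel-wise measure; perceptual scores such as the PQS used in this study can rank the same image pair differently, which is exactly the discrepancy the document reports between JPEG and JPEG2000.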
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
The document discusses efficient VLSI implementations of image encryption using minimal operations. It proposes using discrete cosine transform (DCT) for image compression and encryption simultaneously. For encryption, a linear feedback shift register generates random numbers added to some DCT outputs. The DCT algorithm and arithmetic operators are optimized to reduce operations and increase throughput. Simulation results show encryption in the frequency domain at 656 million samples per second on an 82 MHz clock.
A COMPARATIVE STUDY OF IMAGE COMPRESSION ALGORITHMS (Kate Campbell)
This document compares three image compression algorithms: discrete cosine transform (DCT), discrete wavelet transform (DWT), and a hybrid DCT-DWT technique. It finds that the hybrid technique generally performs better in terms of peak signal-to-noise ratio, mean squared error, and compression ratio. The document provides background on each technique and evaluates their performance based on common metrics like PSNR and MSE. It also reviews related work comparing DCT and DWT that found DWT more efficient but slower. The experimental results in this study show that the hybrid DCT-DWT technique provides better performance than either technique individually.
A novel rrw framework to resist accidental attackseSAT Journals
Abstract Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright, while preserving intactness of host images and providing robustness against unintentional attacks. Robust reversible watermarking (RRW) is used to protect the copyrights and providing robustness against unintentional attacks. The past histogram rotation-based methods suffer from extremely poor invisibility for watermarked images and limited robustness in extracting watermarks from the watermarked images destroyed by unintentional attacks. This paper proposes a wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC) method and Enhanced pixel-wise masking (EPWM). This method embeds a new watermark image and extraction procedures by histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity. It is possible reversibility and invisibility. By using WSQH-SC methods reversibility, invisibility of watermarks can be achieved. The experimental results show the comprehensive performance in terms of reversibility, robustness, invisibility, capacity and run-time complexity widely applicable to different kinds of images. Keywords: — Integer wavelet transform, k-means clustering, masking, robust reversible watermarking (RRW)
Improved anti-noise attack ability of image encryption algorithm using de-noi...TELKOMNIKA JOURNAL
Information security is considered as one of the important issues in the information age used to preserve the secret information through out transmissions in practical applications. With regard to image encryption, a lot of schemes related to information security were applied. Such approaches might be categorized into 2 domains; domain frequency and domain spatial. The presented work develops an encryption technique on the basis of conventional watermarking system with the use of singular value decomposition (SVD), discrete cosine transform (DCT), and discrete wavelet transform (DWT) together, the suggested DWT-DCT-SVD method has high robustness in comparison to the other conventional approaches and enhanced approach for having high robustness against Gaussian noise attacks with using denoising approach according to DWT. MSE in addition to the peak signal-to-noise ratio (PSNR) specified the performance measures which are the base of this study’s results, as they are showing that the algorithm utilized in this study has high robustness against Gaussian noise attacks.
Labview with dwt for denoising the blurred biometric imagesijcsa
In this paper, denoising of a blurred biometric image (a fingerprint) is presented and investigated using LabVIEW applications; the image is blurred and corrupted with Gaussian noise. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into two parts, which increases the manipulation speed for biometric images of large size. The work includes two tasks: the first designs the LabVIEW system to calculate and present the approximation coefficients, by which the image's blur factor is reduced to a minimum value according to the proposed algorithm. The second task removes the image's noise by calculating the regression coefficients according to the Bayesian-Shrinkage estimation method.
This document compares the DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) image compression techniques. It finds that DWT provides higher compression ratios and avoids the blocking artifacts of DCT; DWT allows better localization in both the spatial and frequency domains, has inherent scaling, and better identifies visually relevant data, leading to higher compression ratios. However, DCT is faster than DWT. Experimental results on test images show that DWT achieves higher PSNR and lower MSE and BER than DCT, along with a slightly higher compression ratio, while DCT completes compression more quickly.
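The PSNR and MSE metrics used in such comparisons are simple to compute. A minimal Python sketch (the toy ramp image and the added offset are illustrative):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two same-shaped images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    error = mse(original, reconstructed)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / error)

# Toy example: an 8x8 ramp image and a copy offset by 4 gray levels.
img = np.tile(np.arange(8, dtype=np.uint8) * 32, (8, 1))
noisy = np.clip(img.astype(int) + 4, 0, 255).astype(np.uint8)
print(mse(img, noisy))          # constant offset of 4 -> MSE 16.0
print(round(psnr(img, noisy), 2))  # 10*log10(255^2/16) ~ 36.09 dB
```

BER is computed analogously as the fraction of mismatched bits between the original and extracted bit streams.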
Transform coding is used in image and video processing to exploit correlations between neighboring pixels. The discrete cosine transform (DCT) is commonly used as it provides good energy compaction and de-correlation. DCT represents data as a sum of variable frequency cosine waves in the frequency domain. It has properties like separability and orthogonality that make it efficient for computation. DCT exhibits excellent energy compaction and removal of redundancy between pixels, making it useful for image and video compression.
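Separability means the 2D DCT can be computed as a 1D DCT along one axis followed by a 1D DCT along the other. A minimal Python sketch of the orthonormal DCT-II as a matrix product, demonstrating energy compaction on a flat block (function names are illustrative):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: entry [k, i] for frequency k, sample i."""
    c = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            c[k, i] = scale * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return c

def dct2(block):
    """2D DCT via separability: 1D DCT on columns, then on rows."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# Energy compaction: a flat 8x8 block packs all energy into the DC term.
flat = np.full((8, 8), 100.0)
coeffs = dct2(flat)
print(round(coeffs[0, 0], 1))   # DC term: 8 * 100 = 800.0
print(round(np.abs(coeffs[1:, :]).max(), 6))  # AC terms vanish (up to float error)
```

Orthogonality makes the inverse just the transposed product, `c.T @ coeffs @ c`, which is why DCT-based codecs invert losslessly apart from quantization.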
This document discusses deep learning and convolutional neural networks. It provides an example of using a CNN for face detection and recognition. The CNN architecture includes convolution and subsampling layers to extract features from images. Backpropagation is used to minimize error and adjust weights. The example detects faces in images with 80% accuracy for faces and 57% for non-faces. Iterative search with a CNN is also used for object recognition in full images.
An Approach for Image Deblurring: Based on Sparse Representation and Regulari...IRJET Journal
This document proposes an approach for image deblurring based on sparse representation and a regularized filter. The approach splits the blurred input image into patches, estimates sparse coefficients for each patch using dictionary learning, updates the dictionary, and estimates the deblur kernel. The deblur kernel is applied using Wiener deconvolution and further processed with a regularized filter to recover the original image. The approach was tested on MATLAB and evaluation metrics like RMSE, PSNR, and SSIM along with visual analysis showed it performed better deblurring compared to existing methods.
The document discusses DCT (Discrete Cosine Transform) based steganography. It introduces steganography and some examples of its historical uses. It then summarizes the basics of DCT, why it is useful for steganography, an example steganography algorithm that embeds messages in the DCT coefficients of images, and possibilities for future improvements like using both steganography and cryptography for increased security. The presentation was created by a group of students for their steganography project.
This document discusses image compression using the discrete cosine transform (DCT). It develops simple Mathematica functions to compute the 1D and 2D DCT. The 1D DCT transforms a list of real numbers into elementary frequency components. It is computed via matrix multiplication or using the discrete Fourier transform with twiddle factors. The 2D DCT applies the 1D DCT to rows and then columns, making it separable. These functions illustrate how Mathematica can be used to prototype image processing algorithms.
This document presents a comparative analysis of digital image watermarking techniques in the frequency domain using MATLAB Simulink. It discusses watermarking using the discrete cosine transform (DCT) and discrete wavelet transform (DWT). For DCT, the image is divided into blocks and DCT is applied before embedding the watermark in middle-frequency coefficients; for extraction, the same process is reversed. For DWT, the image is decomposed into sub-bands before embedding the watermark into the low-high frequency sub-band; extraction follows the reverse process. The document also proposes a technique using both DCT and DWT that embeds a watermark into DCT coefficients of DWT sub-bands for increased robustness.
An Approach for Image Deblurring: Based on Sparse Representation and Regulari...IRJET Journal
This document presents an approach for image deblurring based on sparse representation and a regularized filter. The approach involves splitting the blurred input image into patches, estimating sparse coefficients for each patch, learning dictionaries from the coefficients, and merging the patches. The merged patches are subtracted from the blurred image to obtain the deblur kernel. Wiener deconvolution with the kernel is then applied and followed by a regularized filter to recover the original image without blurring. The approach was tested on MATLAB and evaluation metrics like RMSE, PSNR, and SSIM showed it performed better than existing methods, recovering images with more details and contrast.
Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Imagesijtsrd
A new artifact removal method, a cascade of a Gaussian fuzzy edge decider and fuzzy image correction, is proposed. In this design, a highly compressed (low bit rate) image is considered. Each overlapped block of the image is fed to a Gaussian fuzzy decider to check whether the central pixel of the block needs correction; if so, the central pixel is corrected by the fuzzy gradient of its neighbors. Experimental results show remarkable improvement with the presented gFAR algorithm compared to past methods, both subjectively (visual quality) and objectively (PSNR). Deepak Gambhir "Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-6, October 2020, URL: https://www.ijtsrd.com/papers/ijtsrd33361.pdf Paper Url: https://www.ijtsrd.com/computer-science/multimedia/33361/gaussian-fuzzy-blocking-artifacts-removal-of-high-dct-compressed-images/deepak-gambhir
A BLIND ROBUST WATERMARKING SCHEME BASED ON SVD AND CIRCULANT MATRICEScsandit
Multimedia security has been the focus of considerable research activity because of its wide application area. The major technology for achieving copyright protection, content authentication, access control and multimedia security is watermarking: the process of embedding data into a multimedia element such as an image or audio; this embedded data can later be extracted from, or detected in, the embedded element for different purposes. In this work, a blind watermarking algorithm based on SVD and circulant matrices is presented. Every circulant matrix is associated with a matrix for which the SVD decomposition coincides with the spectral decomposition. This improves on the Chandra algorithm [1]; our presentation includes a discussion of the data hiding capacity, watermark transparency and robustness against a wide range of common image processing attacks.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM...VLSICS Design
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to a Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35 MHz, processing an 8x8 block in 6604 ns with a pipeline latency of 140 cycles.
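The quantization and zigzag stages described above are easy to prototype in software before committing to hardware. A minimal Python sketch (the uniform quantization table and the toy coefficient block are illustrative, not the JPEG tables):

```python
import numpy as np

def zigzag_order(n=8):
    """Visiting order (row, col) for an n x n zigzag scan: walk anti-diagonals,
    alternating direction so low-frequency coefficients come first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def quantize_and_scan(coeffs, qtable):
    """Quantize an 8x8 coefficient block and read it out in zigzag order."""
    q = np.round(coeffs / qtable).astype(int)
    return [q[r, c] for r, c in zigzag_order(8)]

# Toy block: only the DC and one AC coefficient survive a uniform quantizer.
block = np.zeros((8, 8))
block[0, 0], block[0, 1] = 160.0, 35.0
qtable = np.full((8, 8), 16.0)
print(quantize_and_scan(block, qtable)[:4])  # -> [10, 2, 0, 0]
```

After zigzag reordering the long run of trailing zeros is what the entropy coder exploits; in hardware the same reordering is done with the zigzag buffer's address sequence.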
A Review on Image Compression using DCT and DWTIJSRD
This document reviews image compression techniques using discrete cosine transform (DCT) and discrete wavelet transform (DWT). It discusses how DCT transforms images from spatial to frequency domains, allowing for energy compaction and efficient encoding. DWT is a multi-resolution technique that represents images at different frequency bands. The document analyzes various studies that have used DCT and DWT for compression and compares their performance in terms of metrics like peak signal-to-noise ratio and compression ratio. It finds that DWT generally provides better compression performance than DCT, though DCT requires less computational resources. A hybrid DCT-DWT technique is also proposed to combine the advantages of both methods.
This document summarizes an image watermarking algorithm in the discrete wavelet transform (DWT) domain for image authentication. The algorithm first converts the input image to grayscale and divides the Y component into blocks. It then applies a 2-level DWT and uses a Canny edge detector to generate a watermark from the image contours. The watermark is embedded in the DWT coefficients after applying an Arnold transform for security. In extraction, the watermark is recovered from the DWT coefficients and compared to the original to authenticate the image. Experiments show the algorithm is effective against attacks like image pasting while maintaining high PSNR for perceptual invisibility of the watermark.
This document discusses optimizing image convolution operations for GPUs using CUDA. It describes how to implement a separable convolution filter in two passes, one for rows and one for columns. This reduces redundant data loads compared to a naive single-pass implementation. The document also discusses techniques like loading multiple pixels per thread and padding thread blocks to achieve coalesced global memory accesses and avoid idle threads when processing boundary pixels. Overall, the key optimizations are using a separable filter, loading multiple pixels per thread, and padding for coalesced memory access.
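The two-pass row/column idea is independent of CUDA and can be sketched on the CPU. A minimal Python version (the edge padding mode and the box kernel are illustrative choices); for a k x k separable kernel it performs 2k multiplies per pixel instead of k*k:

```python
import numpy as np

def separable_blur(image, kernel_1d):
    """Two-pass separable convolution: filter rows, then columns."""
    pad = len(kernel_1d) // 2
    # Row pass: slide the 1D kernel horizontally.
    padded = np.pad(image, ((0, 0), (pad, pad)), mode="edge")
    rows = np.zeros_like(image, dtype=np.float64)
    for i, w in enumerate(kernel_1d):
        rows += w * padded[:, i:i + image.shape[1]]
    # Column pass: slide the same kernel vertically over the row result.
    padded = np.pad(rows, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(rows)
    for i, w in enumerate(kernel_1d):
        out += w * padded[i:i + image.shape[0], :]
    return out

# A normalized 3-tap box filter leaves a flat image (nearly) unchanged.
img = np.full((4, 4), 9.0)
k = np.array([1 / 3, 1 / 3, 1 / 3])
print(separable_blur(img, k))
```

The GPU version described in the document applies the same decomposition, with the extra concerns of coalesced loads and per-thread pixel batching.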
This document discusses fast algorithms for computing the discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) using Winograd's method.
The conventional DCT and IDCT algorithms have high computational complexity due to cosine functions. Winograd's algorithm reduces the number of multiplications required for matrix multiplication by rearranging terms.
The document proposes applying Winograd's algorithm to DCT and IDCT computation by representing the transforms as matrix multiplications. This approach reduces the number of multiplications required for an 8x8 block from over 16,000 to just 736, with fewer additions and subtractions as well, leading to faster DCT and IDCT computation than the conventional algorithms.
Effect of Block Sizes on the Attributes of Watermarking Digital ImagesDr. Michael Agbaje
This work examines the effect of block size on the attributes (robustness, capacity, time of watermarking, visibility and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT). The DCT breaks the image into various frequency bands and allows watermark data to be embedded easily; the advantage of this transformation is its ability to pack the input image data into a few coefficients. The 8 x 8 block size is commonly used in watermarking; this work investigates the effect of block sizes below and above 8 x 8 on the watermark's attributes. Robustness and capacity increase as the block size increases (62-70 dB, 31.5-35.9 bits/pixel), while the time for watermarking decreases. The watermark is still visible for block sizes below 8 x 8 but invisible for those above it. Distortion decreases sharply from a high value at a 2 x 2 block size to a minimum at 8 x 8, then gradually increases with block size. The overall observation is that the watermarked image gradually reduces in quality due to fading above an 8 x 8 block size. For easy detection of the image against piracy, the 16 x 16 block size gives the best result because it most closely resembles the original image in visual quality despite containing a hidden watermark.
This document compares JPEG and JPEG2000 image compression techniques using objective and perceptual quality measures. JPEG2000 provides higher PSNR values at all bitrates but JPEG has better picture quality scale (PQS) scores, a perceptual measure, at moderate and high bitrates. At very low bitrates below 0.5 bpp, JPEG2000 produces higher quality images according to PQS due to its wavelet-based compression method. The study uses four test images with different spatial and frequency characteristics to evaluate the compression methods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
The document discusses efficient VLSI implementations of image encryption using minimal operations. It proposes using discrete cosine transform (DCT) for image compression and encryption simultaneously. For encryption, a linear feedback shift register generates random numbers added to some DCT outputs. The DCT algorithm and arithmetic operators are optimized to reduce operations and increase throughput. Simulation results show encryption in the frequency domain at 656 million samples per second on an 82 MHz clock.
A COMPARATIVE STUDY OF IMAGE COMPRESSION ALGORITHMSKate Campbell
This document compares three image compression algorithms: discrete cosine transform (DCT), discrete wavelet transform (DWT), and a hybrid DCT-DWT technique. It finds that the hybrid technique generally performs better in terms of peak signal-to-noise ratio, mean squared error, and compression ratio. The document provides background on each technique and evaluates their performance based on common metrics like PSNR and MSE. It also reviews related work comparing DCT and DWT that found DWT more efficient but slower. The experimental results in this study show that the hybrid DCT-DWT technique provides better performance than either technique individually.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...VLSICS Design
This paper presents the architecture and VHDL design of a Two-Dimensional Discrete Cosine Transform (2D-DCT) with quantization and zigzag arrangement. This architecture is used as the core and path in JPEG image compression hardware. The 2D-DCT calculation is made using the 2D-DCT separability property, such that the whole architecture is divided into two 1D-DCT calculations by using a transpose buffer. The architecture for the quantization and zigzag process is also described; the quantization process is done using a division operation. The design is aimed at a Spartan-3E XC3S500 FPGA. The 2D-DCT architecture uses 1891 slices, 51 I/O pins, and 8 multipliers of one Xilinx Spartan-3E XC3S500E FPGA and reaches an operating frequency of 101.35 MHz. One input block of 8 x 8 elements of 8 bits each is processed in 6604 ns, and the pipeline latency is 140 clock cycles.
Novel DCT based watermarking scheme for digital imagesIDES Editor
There is ever-growing interest in copyright protection of multimedia content, so digital watermarking techniques are widely practiced. With internet connectivity and digital libraries, the protection of digital content through watermarking is extensively researched. In this paper we present a novel watermark generation scheme based on the histogram of the image, and we apply the watermark to the original image in the transform (DCT) domain. We further study the performance of the watermark against some common attacks that images can undergo. Experimental results show that the embedded watermark is imperceptible and image quality is not degraded.
Enhancement of genetic image watermarking robust against cropping attackijfcstjournal
An enhancement of an image watermarking algorithm, made robust against a particular attack by using a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking; to preserve both of these characteristics at a reasonable level, a genetic algorithm is used. Factors introduced to provide robustness against cropping attacks include the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
Secure High Capacity Data Hiding in Images using EDBTCijsrd.com
Block truncation coding (BTC) is an efficient compression technique with low computational complexity, but it has two major issues: blocking and false contour effects. Error-diffused BTC (EDBTC) improves on these deficiencies using visual low-pass compensation on the bitmap. In this paper, complementary hiding EDBTC is developed to resolve these issues: first a single watermark is embedded, then multiple watermarks. Usually an adaptive external bias factor is used to embed the watermark, but it damages image quality and robustness; here an extremely small bias factor controls the watermark embedding, enabling a high-capacity scenario without significantly damaging image quality. The few data hiding schemes proposed until now damage the characteristics of BTC. The security of the embedded watermark is high enough that it cannot easily be extracted by malicious users: the watermark is encrypted by a standard encryption algorithm before it is embedded.
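Plain BTC, which EDBTC builds on, can be sketched in a few lines: each block is reduced to a one-bit-per-pixel bitmap plus two reconstruction levels chosen to preserve the block's mean and standard deviation. A minimal Python sketch (the 2 x 2 block is illustrative; real BTC typically uses 4 x 4 blocks):

```python
import numpy as np

def btc_encode(block):
    """Classic BTC: a bitmap plus two levels preserving mean and std."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())          # pixels at or above the mean
    n = block.size
    if q in (0, n):                # flat block: one level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Replace each bit with its reconstruction level."""
    return np.where(bitmap, high, low)

block = np.array([[2.0, 2.0], [10.0, 10.0]])
bitmap, low, high = btc_encode(block)
print(btc_decode(bitmap, low, high))  # reconstructs [[2, 2], [10, 10]] exactly
```

The blocking and false contour artifacts mentioned above come from representing each block with only two levels; error diffusion on the bitmap is EDBTC's way of masking them.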
International Journal of Engineering Research and Development (IJERD)IJERD Editor
1) The document discusses wavelet transforms as a recent algorithm for image compression. Wavelet transforms can capture variations at different scales in an image, making them well-suited for reducing spatial redundancy.
2) A typical lossy image compression system uses four main components - source encoding, thresholding, quantization, and entropy encoding - to achieve compression by removing different types of redundancy in images.
3) Experimental results on the Lena test image showed that soft thresholding followed by quantization achieved higher peak signal-to-noise ratios than hard thresholding and quantization, demonstrating the effectiveness of wavelet transforms for image compression.
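The hard and soft thresholding rules compared in that experiment differ only in whether surviving coefficients are shrunk toward zero. A minimal Python sketch (the threshold and coefficient values are illustrative):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Zero small coefficients and shrink the survivors toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-5.0, -1.0, 0.5, 2.0, 8.0])
print(hard_threshold(c, 1.5))  # -5, 2 and 8 survive unchanged
print(soft_threshold(c, 1.5))  # survivors shrunk: -3.5, 0.5, 6.5
```

Soft thresholding's shrinkage avoids the discontinuity at the threshold, which is one reason it combined better with quantization in the reported results.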
Jpeg image compression using discrete cosine transform a surveyIJCSES Journal
Due to the increasing requirements for transmission of images in computer and mobile environments, research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing and is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that image compression is needed, so the development of efficient techniques for image compression has become necessary. This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.
This document compares the performance of the discrete cosine transform (DCT) and the wavelet transform for gray scale image compression. It analyzes seven types of images compressed using these two techniques and measures performance using peak signal-to-noise ratio (PSNR). The results show that the wavelet transform outperforms DCT at low bit rates due to its better energy compaction. However, DCT performs better than wavelets at high bit rates near 1 bpp and above. So wavelets provide better compression performance when higher compression is required.
Ibtc dwt hybrid coding of digital imagesZakaria Zubi
This document proposes a hybrid IBTC-DWT encoding scheme that combines the simple computation and edge preservation of interpolative block truncation coding (IBTC) with the high compression ratio of discrete wavelet transform (DWT). Simulation results showed that the proposed algorithm achieved better performance than IBTC-DCT in terms of compression ratio, bit rate, and reconstruction quality at low bit rates. The hybrid approach reduces computational complexity by applying DWT to the smaller sub-images produced by IBTC.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The Journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Micr...IJAAS Team
Microarray technology enables simultaneous monitoring of thousands of genes in parallel. Based on these measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper gives a simple linear iterative clustering (SLIC) based self-organizing maps (SOM) algorithm for segmentation of microarray images. Clusters of pixels that share similar features are called superpixels; they can be used as mid-level units to decrease the computational cost in many vision applications. The proposed algorithm uses superpixels as clustering objects instead of pixels. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing maps clustering methods.
This document compares image encryption using discrete cosine transforms (DCT) and discrete wavelet transforms (DWT). It first provides background on DCT and DWT, explaining how each decomposes an image. It then analyzes the security of encrypting an image using each transform, measuring correlation, mean square error, and peak signal-to-noise ratio. The document finds that DWT provides better encryption, with each level of decomposition having different security statistics. It concludes that DWT is a better decomposition technique for encrypting images.
Robust Watermarking Technique using 2D Logistic Map and Elliptic Curve Crypto...idescitation
Copyright protection is a vital issue in modern-day data transmission over the internet, and watermarking techniques are extensively used for it. In this paper, we propose a robust watermarking scheme using a 2D logistic map and an elliptic curve cryptosystem (ECC) in the DWT domain. The combined encryption is applied to enhance the security of the watermark before the embedding phase. The PSNR values show that the difference between the original cover and the embedded cover is minimal. Similarly, NC values show the robustness and resistance of the proposed technique to common attacks such as scaling and Gaussian noise. Thus, this combination of a 2D logistic map and elliptic curve cryptosystem can be used where higher security of the watermark signal is required.
Similar to 2.[9 17]comparative analysis between dct & dwt techniques of image compression (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
Journal of Information Engineering and Applications www.iiste.org
ISSN 2224-5758 (print) ISSN 2224-896X (online)
Vol 1, No.2, 2011
Comparative Analysis between DCT & DWT Techniques of Image Compression
Anilkumar Katharotiya1*, Swati Patel1, Mahesh Goyani1
1. Department of Computer Engineering, LDCE, GTU, Ahmedabad, Gujarat, India.
* E-mail of the corresponding author: anil_katharotiya2000@yahoo.com
Abstract
Image compression is a method of reducing the storage space of images and videos, which helps improve storage and transmission performance. In image compression, we concentrate not only on reducing size but also on doing so without losing the quality and information of the image. In this paper, two image compression techniques are simulated. The first technique is based on the Discrete Cosine Transform (DCT) and the second on the Discrete Wavelet Transform (DWT). The simulation results are shown and compared across different quality parameters by applying both techniques to various images.
Keywords: DCT, DWT, Image compression, Image processing
1. Introduction
In the modern day, many applications need a large number of images for solving problems. A digital image [1] can be stored on disk, and the storage space it occupies matters: less memory space means less processing time for the image. This is where the concept of image compression comes in. "Image compression [1] means reducing the amount of data required to represent a digital image." There are many applications [2] where image compression is used to effectively increase efficiency and performance, such as health industries, retail stores, federal government agencies, security industries, museums and galleries.
1.1 Requirement for image compression:
An image compression system needs at least the following two components:
a. Encoding System
b. Decoding System
The encoding system takes an original image as input and, after processing it, produces a compressed image as output. The decoding system takes a compressed image as input and produces an output image that closely resembles the original.
Nowadays, DCT [1,3,4,5] and DWT [1,3,7] are the most popular techniques for image compression. Both are frequency-based rather than spatial-based techniques, and each has its own advantages and disadvantages. DWT gives a better compression ratio [1,3] without losing much image information, but it needs more processing power, while DCT needs little processing power but suffers from block artifacts, i.e., loss of some information. Our main goal is to analyze both techniques and compare their results.
2. DCT Technique
Several techniques can transform an image into the frequency domain, such as the DCT, the DFT [1] and the wavelet transform. Each transform has its advantages. The DCT technique is discussed first.
The most common DCT definition of a 1-D sequence of length N is:

Y[k] = C[k] \sum_{n=0}^{N-1} X[n] \cos\left[\frac{(2n+1)k\pi}{2N}\right]   (1)

for k = 0, 1, 2, ..., N−1. Similarly, the inverse DCT transformation is defined as

X[n] = \sum_{k=0}^{N-1} C[k] Y[k] \cos\left[\frac{(2n+1)k\pi}{2N}\right]   (2)

for n = 0, 1, 2, ..., N−1. In both equations (1) and (2), C[n] is defined as

C[n] = \sqrt{1/N} for n = 0,  and  C[n] = \sqrt{2/N} for n = 1, 2, ..., N−1.

The 2-D DCT is a direct extension of the 1-D case and is given by:

y(j,k) = C(j)\,C(k) \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} x(m,n) \cos\left[\frac{(2m+1)j\pi}{2N}\right] \cos\left[\frac{(2n+1)k\pi}{2N}\right]   (3)

where j, k = 0, 1, 2, ..., N−1. The inverse transform is defined as:

x(m,n) = \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} C(j)\,C(k)\, y(j,k) \cos\left[\frac{(2m+1)j\pi}{2N}\right] \cos\left[\frac{(2n+1)k\pi}{2N}\right]   (4)

where m, n = 0, 1, 2, ..., N−1, and C is as defined for the 1-D transformation.
The discrete cosine transform (DCT) is widely used in image processing, especially for compression. The algorithms for encoding and decoding in the DCT technique are shown below.
2.1 Encoding System
There are four steps in the DCT technique to encode or compress the image:
Step 1. The image is broken into N*N blocks of pixels. Here N may be 4, 8, 16, etc.
Step 2. Working from left to right, top to bottom, the DCT is applied to each block.
Step 3. Each block's elements are compressed through quantization, i.e., division by some specific value.
Step 4. The array of compressed blocks that constitutes the image is stored in a drastically reduced amount of space.
So first the whole image is divided into small N*N blocks, and then the DCT is applied to each block. After that, to reduce storage space, the DCT coefficients [5] are quantized by dividing by some value or by a quantization matrix, so that large values become small and need less space. This step is lossy, so the choice of quantization value or quantization matrix [10] affects the entropy and the compression ratio. If we take a small quantization value, we get better quality (lower MSE, mean square error) but a smaller compression ratio. The block size also affects quality and compression ratio: the larger the block size, the higher the compression ratio, but with more loss of information and quality.
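The four encoding steps can be sketched in Python with NumPy. The block size n = 8 and the uniform quantization divisor q = 16 are illustrative assumptions (the paper also allows a full quantization matrix):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis: row k holds C[k]*cos((2m+1)k*pi/2n), as in eq. (1)."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * m + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)      # C[0] = sqrt(1/N)
    return c

def encode_blocks(image, n=8, q=16):
    """Steps 1-3: split into n x n blocks, 2-D DCT each block, quantize by q."""
    h, w = image.shape
    c = dct_matrix(n)
    coded = np.empty((h // n, w // n, n, n), dtype=np.int32)
    for i in range(h // n):
        for j in range(w // n):
            block = image[i * n:(i + 1) * n, j * n:(j + 1) * n].astype(float)
            coeffs = c @ block @ c.T            # separable 2-D DCT, eq. (3)
            coded[i, j] = np.round(coeffs / q)  # lossy quantization step
    return coded
```

For a constant 8x8 block of value 100, only the DC coefficient survives quantization (800/16 = 50), which is one reason smooth regions compress so well.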
2.2 Decoding System
Decoding is the exact reverse of encoding. There are four steps to recover, from the compressed image, an image that is not exact but close to the original:
Step 1. Load the compressed image from disk.
Step 2. The image is broken into N*N blocks of pixels.
Step 3. Each block is de-quantized by applying the reverse of the quantization process.
Step 4. The inverse DCT is applied to each block, and the blocks are combined into an image that closely resembles the original.
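Putting encoding and decoding together, a round trip for a single 8x8 block can be sketched as follows. The quantization step q = 16 is an illustrative choice, not a value from the paper, and the orthonormal DCT-II basis is rebuilt inline so the snippet runs on its own:

```python
import numpy as np

n, q = 8, 16                                  # block size, quantization step (illustrative)
k = np.arange(n).reshape(-1, 1)
m = np.arange(n).reshape(1, -1)
C = np.sqrt(2.0 / n) * np.cos((2 * m + 1) * k * np.pi / (2 * n))
C[0, :] = np.sqrt(1.0 / n)                    # orthonormal DCT-II basis, eq. (1)

block = 30.0 * np.outer(np.arange(n), np.ones(n))   # a smooth gradient test block
coded = np.round((C @ block @ C.T) / q)       # encode: 2-D DCT, then quantize
restored = C.T @ (coded * q) @ C              # decode: de-quantize, then inverse DCT

# "Close but not exact": the error stays within the quantization step.
assert np.max(np.abs(restored - block)) < q
```

Because the basis is orthonormal, the inverse DCT is just the transposed basis, and the only error introduced is the rounding done during quantization.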
In this decoding process, we have to keep N at the same value as was used in the encoding process. De-quantization is then done by multiplying by the quantization value or quantization matrix. As said earlier, this is a lossy technique, so the output image is not an exact copy of the original, but it closely resembles it. The efficiency of this process is measured by the compression ratio. The compression ratio [3] is defined as the ratio of the storage bits of the original image to the storage bits of the compressed image:
Cr = n1 / n2   (5)
where n1 is the number of bits to store the original image and n2 is the number of bits to store the compressed image.
The loss of information is measured by the mean square error (MSE) [1,5] between the reconstructed image and the original image: the greater the MSE, the more information has been lost.
MSE = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ x(i,j) - x'(i,j) \right]^2   (6)
where M, N are the dimensions of the image, x(i,j) is the pixel value at coordinate (i,j) of the original image, and x'(i,j) is the corresponding pixel value of the reconstructed image.
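Equations (5) and (6) translate directly into code; here is a minimal NumPy sketch:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two equal-size images, per equation (6)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.mean(diff ** 2))

def compression_ratio(n1, n2):
    """Cr = n1 / n2, per equation (5): original bits over compressed bits."""
    return n1 / n2
```

For example, storing each pixel in 5 bits instead of 8 gives compression_ratio(8, 5) = 1.6, the figure reported for the DCT simulation below.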
2.3 Simulation Results:
For simulation, we applied the DCT technique to three different images of different sizes, choosing an 8x8 block size. The three original images and the output images are shown below.
[Image pairs: Logo, Baby, Penguins — original (left) vs. DCT-compressed (right)]
Fig 1 Comparison between original image and DCT based compressed image.
In Fig 1 we can see that the reconstructed images are not exactly the same as the originals, but all closely resemble them. DCT has block artifacts: in the compressed image of the baby, there are visible block artifacts on the hand. Choosing a smaller block size minimizes these artifacts. By using an 8x8 block size and applying quantization, we reduced each pixel value from the range 0-255 to the range 0-31, so one pixel needs 5 bits instead of 8 to represent its value. Thus we achieve Cr = 8/5 = 1.6, which is quite reasonable.
The following table shows the MSE of each of the three images, indicating how much information was lost due to the compression technique. Also shown is the total MSE of the original image against a zero image, so we can analyze what percentage of the total information was lost.
Table 1 MSE of output images of DCT technique

Image name | MSE      | Total (MSE of original image with zero image)
Logo       | 15368164 | 2.19 x 10
Baby       | 10289294 | 2.11 x 10
Penguins   | 17012605 | 2.10 x 10
3. DWT Technique
Wavelet analysis [1,3,7] can be used to divide the information of an image into an approximation sub-signal and detailed sub-signals [3]. The approximation sub-signal shows the general trend of the pixel values, while the three detailed sub-signals show the vertical, horizontal and diagonal details or changes in the image. If these details are very small, they can be set to zero without significantly changing the image; the more zeroes there are, the greater the compression ratio. Two types of wavelet transform are used: the continuous wavelet transform [1] and the discrete wavelet transform [1]. Wavelet analysis is computed by a filter bank. There are two types of filter:
1) High pass filter [1]: high frequency information is kept, low frequency information is lost.
2) Low pass filter [1]: low frequency information is kept, high frequency information is lost.
So the signal is effectively decomposed into two parts, a detailed part (high frequency) and an approximation part (low frequency). Level 1 detail is the horizontal detail, level 2 detail is the vertical detail, and level 3 detail is the diagonal detail of the image signal.
Fig 2 Visual representation of the decomposition of a one-dimensional input source using a wavelet transform with three passes.
3.1 Encoding System
The six-step process for compressing an image with the discrete wavelet transform is shown below.
Step 1. The original image is passed through a high pass filter and a low pass filter, applying the filters to each row.
Step 2. The two outputs, l1 and h1, are combined into t1 = [l1 h1].
Step 3. t1 is down-sampled by 2.
Step 4. t1 is again passed through the high pass filter and the low pass filter, this time applied to each column.
Step 5. The outputs of step 4, l2 and h2, are combined by stacking vertically into t3 = [l2; h2].
Step 6. t3 is down-sampled by 2. This is our compressed image.
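The six steps above amount to one level of a separable 2-D wavelet transform. As a sketch, here is a one-level decomposition using the Haar filter pair (pairwise average as the low pass, pairwise difference as the high pass); the paper does not name its filters, so Haar is an illustrative assumption, and the down-sampling by 2 is folded into the pairwise step:

```python
import numpy as np

def haar_dwt2_level1(image):
    """One level of a 2-D Haar DWT, following steps 1-6 above.
    Returns the four sub-bands packed into one array the same size as the input."""
    x = image.astype(float)
    # Rows: low pass = pairwise average, high pass = pairwise difference.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    t1 = np.hstack([lo, hi])                  # t1 = [l1 h1]
    # Columns: repeat the same filtering on t1.
    lo2 = (t1[0::2, :] + t1[1::2, :]) / 2.0
    hi2 = (t1[0::2, :] - t1[1::2, :]) / 2.0
    return np.vstack([lo2, hi2])              # t3 = [l2; h2]
```

On a perfectly flat image, the approximation quadrant keeps the pixel value and all three detail quadrants are zero, which is exactly what makes the detail coefficients so compressible.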
Fig 3. Compressed image (Penguins)
Fig 3 shows the resulting image after applying the encoding process. In this figure we can see four blocks: the first upper block shows the approximation, while the second upper block shows the horizontal detail. The first lower block shows the vertical detail and the second lower block shows the diagonal detail.
The algorithm above shows a one-level discrete wavelet transform. The level of the DWT can be increased by applying this process more than once. Second- and third-level DWT give a better compression ratio, but at the cost of losing some information. A first-level DWT is quite reasonable for achieving both a high compression ratio and good quality (low MSE). We can get Cr = 2 to 2.5, which is very beneficial.
3.2 Decoding System
Here the decoding process is not the exact reverse of the encoding process. The steps are shown below.
Step 1. Extract the low-pass-filtered image and the high-pass-filtered image from the compressed image: the upper half of the matrix is the low-pass-filtered image and the lower half is the high-pass-filtered image.
Step 2. Both images are up-sampled by 2.
Step 3. The two images are summed into one image, called r1.
Step 4. Again extract the low-pass-filtered and high-pass-filtered images, this time by dividing vertically: the first half is the low-pass-filtered image and the second half is the high-pass-filtered image.
Step 5. The summation of both images is our reconstructed image.
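With the Haar filter pair assumed earlier (an illustrative choice, since the paper does not name its filters), the "up-sample and sum" steps become sums and differences of the two halves. A compact forward pass is inlined so this round trip runs standalone:

```python
import numpy as np

def haar_inverse_level1(t3):
    """Invert one level of a 2-D Haar DWT: undo the column step, then the row step."""
    h = t3.shape[0] // 2
    lo2, hi2 = t3[:h, :], t3[h:, :]
    t1 = np.empty((2 * h, t3.shape[1]))
    t1[0::2, :] = lo2 + hi2                   # up-sample columns and combine
    t1[1::2, :] = lo2 - hi2
    w = t1.shape[1] // 2
    lo, hi = t1[:, :w], t1[:, w:]
    x = np.empty((t1.shape[0], 2 * w))
    x[:, 0::2] = lo + hi                      # up-sample rows and combine
    x[:, 1::2] = lo - hi
    return x

# Inline one-level forward transform (rows then columns), matching the encoder steps.
img = np.arange(16, dtype=float).reshape(4, 4)
lo = (img[:, 0::2] + img[:, 1::2]) / 2        # row low pass (average)
hi = (img[:, 0::2] - img[:, 1::2]) / 2        # row high pass (difference)
t1 = np.hstack([lo, hi])
t3 = np.vstack([(t1[0::2] + t1[1::2]) / 2, (t1[0::2] - t1[1::2]) / 2])
assert np.allclose(haar_inverse_level1(t3), img)   # one level reconstructs exactly
```

Note that a single level with no coefficient thresholding is lossless; the loss discussed below comes from zeroing small detail coefficients and from applying further levels.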
Though DWT gives a very high compression ratio, we lose only a minimal amount of information. If we apply more than one level, we get a higher compression ratio, but the reconstructed image is no longer identical to the original: the MSE is greater when DWT is applied more than once. Nowadays this technique is used in the JPEG2000 [1] algorithm as one of its steps. One might think that DWT always gives the better result, but that is not always true: the better result comes at the cost of processing power.
3.3 Simulation Results:
As with DCT earlier, this technique is applied to the same three images, and the results are presented here.
Fig. 4. Comparison between the original images and the DWT-compressed images (Logo, Baby, Penguins).
Table 2. MSE of output images of the DWT technique

Image name   MSE           Total (MSE of original image with a zero image)
Logo         7.23 x 10D    2.19 x 10
Baby         1.36 x 10D    2.11 x 10
Penguins     8.05 x 10D    2.10 x 10
The output images show no block artifacts, because DWT is applied to the whole image rather than to blocks. We obtained a compression ratio of Cr = 1.9 to 2.3, and the MSE of the reconstructed images is also small, as shown in Table 2.
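The MSE figures in Table 2 compare each reconstructed image against its original, and the "Total" column against a zero image. A minimal sketch of that measurement, assuming the images are given as NumPy arrays (the sample image here is synthetic, not one of the three test images):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally sized images."""
    a = a.astype(float)
    b = b.astype(float)
    return np.mean((a - b) ** 2)

# Synthetic 8-bit gradient image and a reconstruction that is off by 1 everywhere.
original = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
reconstructed = original.astype(float) + 1.0
err = mse(original, reconstructed)              # MSE of the reconstruction
total = mse(original, np.zeros_like(original))  # MSE against a zero image
```

Comparing `err` to `total` shows, as in Fig. 5, how small the information loss is relative to the total information content of the image.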
4. Result analysis: comparison between the DCT and DWT techniques
With the DCT technique we can achieve a compression ratio of Cr = 1.6.
With the DWT technique we can achieve a compression ratio of Cr = 1.9 to 2.3.
Using Table 1 and Table 2, we draw two graphs to analyze the data.
Fig. 5. Graph of information loss and total information for DCT and DWT.
The graph in Fig. 5 compares the DCT- and DWT-compressed images with the information content of the originals. The loss of information is quite negligible in both techniques.
Fig. 6. Graph comparing the information loss of DCT and DWT.
The graph in Fig. 6 compares the information lost by the DCT and DWT techniques. From it we conclude that DWT loses less information than DCT. Quality-wise, the DWT technique is therefore better than DCT, but in terms of processing time DCT is better than DWT.
5. Conclusion
From these experiments we conclude that each technique has its own advantages and disadvantages, but both are quite efficient for image compression: a reasonable compression ratio can be achieved without losing much important information. Our experiments show that the DWT technique [1,3,7] is more efficient than the DCT technique [1,3,5,6] in terms of quality, while in terms of processing time DCT is better than DWT.
References
[1] Rafael C. Gonzalez and Richard E. Woods (1992), Digital Image Processing (2nd edition), NJ: Prentice Hall.
[2] LockerGnome (2011), "Real World Applications of Image Compression", http://www.lockergnome.com/nexus/windows/2006/12/25/real-world-applications-of-image-compression/ [accessed 11 Dec 2011].
[3] Swastik Das and Rashmi Ranjan Sethy, "A Thesis on Image Compression using Discrete Cosine Transform and Discrete Wavelet Transform", guided by Prof. R. Baliarsingh, Dept. of Computer Science & Engineering, National Institute of Technology, Rourkela.
[4] Andrew B. Watson, NASA Ames Research Center, "Image Compression Using the Discrete Cosine Transform", Mathematica Journal, 4(1), 1994, pp. 81-88.
[5] M. Stumpfl, "Digital Watermarking", University of Southampton, 2001.
[6] Nikolay N. Ponomarenko, Vladimir V. Lukin, Karen Egiazarian and Jaakko Astola, "DCT Based High Quality Image Compression", in Proceedings of SCIA 2005, pp. 1177-1185.
[7] Karen Lees, "Image Compression using Wavelets", M.S. report, 2002.
[8] Saeid Saryazdi and Mehrnaz Demehr (2005), "A Blind DCT Domain Digital Watermarking", in Proceedings of the 3rd International Conference SETIT, Tunisia, March 2005.
[9] G. R. Ramesh and K. Rajgopal, "Binary Image Compression using the Radon Transform", in IEEE XVI Annual Convention and Exhibition, pp. 178-182, 1990.
[10] Nopparat Pantsaena, M. Sangworasil, C. Nantajiwakornchai and T. Phanprasit, "Image Compression using Vector Quantization", ReCCIT, Thailand, 2005.