The standard JPEG format is widely regarded as near-optimal for image compression; its compression ratio sometimes reaches 30:1. The compression ratio of JPEG can be increased further by embedding the Five Modulus Method (FMM) into the JPEG algorithm. The novel algorithm, called FJPEG (Five-JPEG), takes roughly twice the time of the standard JPEG algorithm or more, while the quality of the reconstructed image after compression approximately matches that of JPEG. Standard test images are used to implement and support the suggested idea in this paper, and the error metrics are computed and compared with JPEG.
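As a rough illustration: in the FMM literature, the Five Modulus Method rounds every pixel to the nearest multiple of five before the usual JPEG stages, which shrinks the symbol alphabet the entropy coder sees. A minimal numpy sketch of that rounding step (the function name is ours, not the paper's):

```python
import numpy as np

def fmm_transform(img: np.ndarray) -> np.ndarray:
    """Round every pixel to the nearest multiple of 5 (Five Modulus Method)."""
    out = 5 * np.round(img.astype(np.float64) / 5.0)
    return np.clip(out, 0, 255).astype(np.uint8)

# Pixel values collapse onto multiples of 5, leaving far fewer distinct
# symbols for JPEG's entropy coder to encode.
block = np.array([[52, 53, 61], [49, 48, 50]], dtype=np.uint8)
print(fmm_transform(block))   # [[50 55 60], [50 50 50]]
```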
ROI Based Image Compression in Baseline JPEG (IJERA Editor)
To improve the efficiency of the standard JPEG compression algorithm, an adaptive quantization technique based on support for a region of interest (ROI) is introduced. Since this is a lossy compression technique, the less important bits are discarded and are not restored during decompression. Adaptive quantization is carried out by applying two different quantization matrices to the picture, as specified by the user. The user can select any part of the image and enter the required quality for compression: if the subject is more important to the user than the background, more quality is allotted to the subject than to the background, and vice versa. Adaptive quantization in baseline sequential JPEG is carried out by applying the Forward Discrete Cosine Transform (FDCT) and the two user-supplied quantization settings for compression, thereby achieving region-of-interest compression, and the Inverse Discrete Cosine Transform (IDCT) for decompression. This technique ensures that memory is used efficiently. Moreover, we have specifically designed it for clearly identifying defects in leather samples.
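A minimal sketch of the dual-quantization idea, assuming standard baseline-JPEG machinery (8x8 blocks, level shift by 128, the usual luminance table); the helper and the ROI rule shown are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (quality 50).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def roi_quantize(img, roi_mask, q_roi, q_bg):
    """Quantize each 8x8 DCT block with q_roi if it touches the ROI, else q_bg."""
    h, w = img.shape
    out = np.zeros((h, w))
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            q = q_roi if roi_mask[r:r+8, c:c+8].any() else q_bg
            coeffs = dctn(img[r:r+8, c:c+8] - 128.0, norm='ortho')
            quantized = np.round(coeffs / q)                 # the lossy step
            out[r:r+8, c:c+8] = idctn(quantized * q, norm='ortho') + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: finer quantization (higher quality) inside the ROI.
img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
roi = np.zeros((64, 64), bool); roi[16:48, 16:48] = True
rec = roi_quantize(img, roi, q_roi=Q50 * 0.5, q_bg=Q50 * 2.0)
```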
The document proposes a Modified Fuzzy C-Means (MFCM) clustering algorithm to segment chromosomal images. The MFCM includes preprocessing steps of median filtering and image enhancement to address the noise sensitivity and segmentation error problems of existing methods. It achieves improved segmentation accuracy of 61.6%, compared to 56.4%, 55.47%, and 57.6% for Fuzzy C-Means, Adaptive Fuzzy C-Means, and Improved Adaptive Fuzzy C-Means respectively. The MFCM produces higher quality segmented images, as indicated by its lower mean square error and higher peak signal-to-noise ratio values.
This document compares image enhancement and analysis techniques using image processing and wavelet techniques on thermal images. It discusses various image enhancement methods such as converting images to grayscale, histogram equalization, contrast enhancement, linear and adaptive filtering, morphology, FFT transforms, and wavelet-based techniques including image fusion, denoising, and compression. Results showing enhanced, denoised, and compressed images are presented and analyzed. The document concludes that wavelet techniques provide better enhancement of thermal images compared to traditional image processing methods.
Satellite image compression algorithm based on the FFT (ijma)
Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space, and it also reduces the time required for images to be transmitted to the ground. This paper presents a new coding scheme for satellite images. In this study we apply the fast Fourier transform and scalar quantization to the standard LENA image and a satellite image. The results obtained after the scalar quantization (SQ) phase are encoded using entropy encoding. After decompression, the results show that it is possible to achieve compression ratios higher than 78%; the results are discussed in the paper.
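A minimal sketch of the FFT-plus-scalar-quantization pipeline described above, with an illustrative step size; the fraction of zero-quantized coefficients stands in for the entropy-coding stage, which is omitted:

```python
import numpy as np

def fft_sq_codec(img: np.ndarray, step: float):
    """FFT + uniform scalar quantization (SQ), with reconstruction."""
    spectrum = np.fft.fft2(img.astype(np.float64))
    q = np.round(spectrum / step)       # SQ of the complex coefficients
    zero_frac = np.mean(q == 0)         # zeros are what entropy coding exploits
    recon = np.real(np.fft.ifft2(q * step))
    return np.clip(recon, 0, 255), zero_frac

img = np.random.randint(0, 256, (128, 128))
recon, zeros = fft_sq_codec(img, step=500.0)
print(f"zero-quantized coefficients: {zeros:.1%}")
```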
The document describes a method for image fusion and optimization using the stationary wavelet transform and particle swarm optimization. Image fusion combines information from multiple images to extract the relevant information. The proposed method applies the stationary wavelet transform to the source images to decompose them into wavelet coefficients, and particle swarm optimization is then used to optimize the transformed images. The inverse stationary wavelet transform is applied to the optimized coefficients to generate the fused image. The method is tested on various images, and performance is evaluated using metrics such as peak signal-to-noise ratio, entropy, mean square error and standard deviation.
PURPOSES: This study aims to perform microcalcification detection by performing image enhancement on mammography images using the negative image transformation and histogram equalization. METHOD: Mammography images in .pgm format are converted to .jpg format, processed into negative images, and then processed again using histogram equalization. RESULT: The results of the image enhancement process using the negative image technique and histogram equalization are compared and validated with MSE and PSNR on each mammographic image. CONCLUSION: Image enhancement of mammography images can be done; however, only some images show improved quality. This is affected by the threshold used, which plays an important role in obtaining better visualization of mammographic images.
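Both enhancement steps are standard point operations; a minimal numpy sketch of the pipeline (negative image, then histogram equalization, then MSE/PSNR validation), with a stand-in image in place of real mammograms:

```python
import numpy as np

def negative(img: np.ndarray) -> np.ndarray:
    """Negative transform for an 8-bit image: s = 255 - r."""
    return 255 - img

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization via the normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # map to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

mammo = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
enhanced = hist_equalize(negative(mammo))

mse = np.mean((mammo.astype(np.float64) - enhanced) ** 2)
psnr = 10 * np.log10(255.0**2 / mse)
print(f"MSE={mse:.1f}  PSNR={psnr:.2f} dB")
```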
Application of Digital Image Processing in Drug Industry (IOSRjournaljce)
This document summarizes four digital image processing techniques used to detect defects in tablet strips: morphology operations, template matching, mathematical manipulation, and Euler's method. Morphology operations can detect broken tablets, template matching and mathematical manipulation can find broken and missing tablets, and Euler's method identifies holes in tablets. The techniques are applied to tablet strip images in MATLAB and effectively detect various defects. In summary, digital image processing provides a way to automatically inspect tablet strips for defects in pharmaceutical manufacturing.
A VLSI architecture for efficient removal of noises and enhancement of images (IAEME Publication)
This document summarizes a proposed VLSI architecture for removing noise and enhancing images. The architecture uses a decision tree-based approach with three modules: an isolation module to identify noisy pixels, a fringe module to determine if pixels are on edges, and a similarity module to further analyze pixels. Noisy pixels are replaced with reconstructed values from an edge-preserving filter. Histogram equalization is then applied to improve image quality. The proposed architecture is implemented on FPGA and evaluated on different types of noises using metrics like PSNR and MSE.
1) The document proposes a hybrid quantization method for JPEG image compression to overcome limitations in the standard JPEG method.
2) In standard JPEG, only one quantization matrix is used for the entire image, but images have varying frequency contents. The hybrid method selects the quantization matrix based on the frequency content of each image block.
3) Lower quality matrices like Q10-Q40 can improve compression for blocks with more high frequencies, while higher quality matrices like Q50-Q90 can improve image quality by retaining more low frequencies. This provides a better tradeoff between compression ratio and image quality than the standard method.
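For context, the Q10-Q90 tables are conventionally derived from the Q50 luminance table by the standard JPEG quality-scaling rule; the per-block selector shown below is a hypothetical stand-in for the paper's frequency-content test:

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def scaled_table(quality: int) -> np.ndarray:
    """Standard JPEG quality scaling: derive the Q10..Q90 tables from Q50."""
    s = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip(np.floor((s * Q50 + 50) / 100), 1, 255)

def pick_table(block: np.ndarray, thresh: float = 20.0) -> np.ndarray:
    """Hypothetical hybrid rule: coarser table for high-frequency blocks."""
    coeffs = dctn(block - 128.0, norm='ortho')
    hf_energy = np.abs(coeffs[4:, 4:]).mean()   # crude high-frequency measure
    return scaled_table(30) if hf_energy > thresh else scaled_table(70)
```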
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (LVQ) for image compression.
Image compression using genetic programming (Ahmed Ebid)
The fast growth in digital image applications such as web sites, multimedia and even personal image archives has encouraged researchers to develop advanced techniques to compress images. Many compression techniques, reversible or not, have been introduced. Most of those techniques were based on statistical analysis of repetition or on mathematical transforms to reduce the size of the image. This research concerns applying the Genetic Programming (GP) technique to image compression. To achieve that goal, a parametric study was carried out to determine the optimum combination of GP parameters for maximum quality and compression ratio. For simplicity the study considered 256-level grayscale images. Special C++ software was developed to carry out all calculations, and the compressed images were rendered using Microsoft Excel. The study results were compared with JPEG results, as one of the most popular lossy compression techniques. It is concluded that using optimum GP parameters leads to acceptable quality (objectively and subjectively) at compression ratios ranging between 2.5 and 4.5.
Image processing techniques in nuclear medicine (Rutuja Solkar)
Digital images in nuclear medicine consist of grids of pixels that represent discrete picture elements. Image processing techniques are used to analyze these images. Key techniques include:
1. Visualizing images by adjusting grayscale, color scale, and windowing.
2. Defining regions and volumes of interest to extract numerical data from tissues.
3. Co-registering images acquired at different times to compare changes in the same subject.
4. Creating time-activity curves from series of frames to analyze radiotracer uptake over time in regions of interest (see the sketch after this list).
5. Smoothing images to reduce noise (at the cost of blurring detail), and applying edge detection and segmentation to identify boundaries and classify tissues.
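As a sketch of item 4 (time-activity curves), assuming a dynamic study stored as a (frames, height, width) counts array and a boolean ROI mask, both hypothetical:

```python
import numpy as np

def time_activity_curve(frames: np.ndarray, roi: np.ndarray) -> np.ndarray:
    """Mean counts inside an ROI for each frame of a dynamic study.

    frames: (n_frames, h, w) counts; roi: (h, w) boolean mask.
    """
    return frames[:, roi].mean(axis=1)

frames = np.random.poisson(50, size=(20, 64, 64)).astype(np.float64)
roi = np.zeros((64, 64), bool); roi[20:30, 20:30] = True
tac = time_activity_curve(frames, roi)   # uptake over 20 time points
```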
Segmentation of Tumor Region in MRI Images of Brain using Mathematical Morpho... (CSCJournals)
This paper introduces an efficient detection of brain tumors from cerebral MRI images. The methodology consists of two steps: enhancement and segmentation. To improve the quality of the images and limit the risk of distinct regions fusing in the segmentation phase, an enhancement process is applied. We apply mathematical morphology to increase the contrast in MRI images and to segment them. Experimental results on brain images show the feasibility and performance of the proposed approach.
Face Recognition Using Neural Network Based Fourier Gabor Filters & Random Pr... (CSCJournals)
Face detection and recognition have many applications in a variety of fields such as authentication, security, video surveillance and human interaction systems. In this paper, we present a neural network system for face recognition. A feature vector based on Fourier Gabor filters is used as the input of our classifier, which is a Back Propagation Neural Network (BPNN). Because the input vector of the network has a large dimension, we investigate the use of Random Projection as a dimensionality reduction method to reduce its feature subspace. Theory and experiment indicate the robustness of our solution.
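Random Projection itself is a one-line operation once the projection matrix is drawn; a minimal sketch with illustrative dimensions (the Gabor feature extraction and the BPNN are not reproduced):

```python
import numpy as np

def random_projection(features: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Project d-dimensional feature vectors onto a random k-dimensional subspace."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    # Gaussian random matrix; the 1/sqrt(k) scaling approximately preserves
    # pairwise distances (Johnson-Lindenstrauss lemma).
    r = rng.standard_normal((d, k)) / np.sqrt(k)
    return features @ r

gabor_feats = np.random.rand(32, 10_000)         # e.g. 32 faces, 10,000-D vectors
reduced = random_projection(gabor_feats, k=256)  # classifier input becomes 256-D
```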
This document summarizes an article that proposes modifications to the JPEG 2000 image compression standard to achieve higher compression ratios while maintaining acceptable error rates. The proposed Adaptive JPEG 2000 technique involves pre-processing images with a transfer function to make them more suitable for compression by JPEG 2000. This is intended to provide higher compression ratios than the original JPEG 2000 standard while keeping root mean square error within allowed thresholds. The document provides background on JPEG 2000 and lossy image compression techniques, describes the proposed pre-processing approach, and indicates it was tested on single-channel images.
This document provides an overview of modern techniques for detecting video forgeries through a literature review. It discusses detecting double MPEG compression, which can identify tampering by analyzing artifacts introduced during recompression. Methods are presented for detecting duplicated frames or regions, extending image forgery detection to videos, combining artifacts across screen shots, and using multimodal feature fusion. Ghost shadow artifacts from video inpainting are also discussed as a technique for detecting forgeries. The literature review assesses these various video forgery detection methods and their applicability to different situations.
Feature Extraction of an Image by Using Adaptive Filtering and Morphological S... (IOSR Journals)
Abstract: For enhancing an image, various enhancement schemes are used, including gray scale manipulation, filtering and Histogram Equalization. Histogram Equalization is one of the best known image enhancement techniques; it became a popular technique for contrast enhancement because it is simple and effective. The basic idea of the Histogram Equalization method is to remap the gray levels of an image. Here, using morphological segmentation, we obtain the segmented image; morphological reconstruction is used to segment the image. A comparative analysis of different enhancement and segmentation methods is carried out on the basis of subjective and objective parameters. The subjective parameter is visual quality, and the objective parameters are Area, Perimeter, Min and Max intensity, Avg Voxel Intensity, Std Dev of Intensity, Eccentricity, Coefficient of Skewness, Coefficient of Kurtosis, Median intensity, and Mode intensity. Keywords: Histogram Equalization, Segmentation, Morphological Reconstruction.
Nowadays, image manipulation is common due to the availability of image processing software such as Adobe Photoshop or GIMP. The original image captured by a digital camera or smartphone is normally saved in the JPEG format due to its popularity. The JPEG algorithm works on image grids of 8x8 pixels, compressed independently. For an unmodified image, all 8x8 grids should have a similar error level; on resaving, each block should degrade at approximately the same rate, since a similar amount of error is introduced across the entire image. For a modified image, the altered blocks should show higher error potential compared to the remaining part of the image. The objective of this paper is to develop a photo forensics algorithm which can detect photo manipulation. Error level analysis (ELA) is further enhanced using vertical and horizontal histograms of the ELA image to pinpoint the exact location of modification. Results showed that our proposed algorithm could successfully identify modified images as well as show the exact location of the modifications.
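The core ELA step can be sketched in a few lines with Pillow; the row/column sums follow the abstract's description, while the function name and the resave quality are our assumptions:

```python
import io
import numpy as np
from PIL import Image, ImageChops

def ela(path: str, quality: int = 90):
    """Error level analysis: resave as JPEG and measure per-pixel differences.

    Tampered blocks tend to show a different error level than the rest;
    row/column sums of the ELA map localize the modification.
    """
    original = Image.open(path).convert('RGB')
    buf = io.BytesIO()
    original.save(buf, 'JPEG', quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float64)
    ela_map = diff.sum(axis=2)                  # per-pixel error level
    return ela_map, ela_map.sum(axis=0), ela_map.sum(axis=1)  # map, col/row hists
```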
Efficient Image Compression Technique using JPEG2000 with Adaptive Threshold (CSCJournals)
Image compression is a technique to reduce the size of an image, which is helpful for transmission and storage. Due to limited communication bandwidth, we need an optimally compressed image with good visual quality. The JPEG2000 compression technique is well suited to image processing as it uses the DWT (Discrete Wavelet Transform), but in this paper we propose a fast and efficient image compression scheme using the JPEG2000 technique with an adaptive subband threshold. We use the subband-adaptive threshold in the decomposition section, which gives a higher compression ratio and better visual quality than existing compression techniques. The subband-adaptive threshold concentrates on denoising each subband (except the lowest-coefficient subbands) by minimizing insignificant coefficients and adapting the modified coefficients which are significant and most responsible for image reconstruction. Finally, we use the embedded block coding with optimized truncation (EBCOT) entropy coder, whose three passes give a more compressed image. The proposed method is compared to existing approaches and gives superior results that satisfy human visual quality; the resulting compressed images are evaluated using the performance parameter PSNR.
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixel membership values to reduce each pixel's resistance to being clustered, improving segmentation in noisy images.
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
Code with results
ABSTRACT
This is a method of compression based on Huffman coding using histogram information and image segmentation. It is used for lossless and lossy compression; how much of the image is compressed in a lossy manner versus a lossless manner depends on the information obtained from the histogram of the image. The results show that the difference between the original and compressed images is visually negligible. The compression ratio (CR) and peak signal-to-noise ratio (PSNR) are obtained for different images. The relation between compression ratio and peak signal-to-noise ratio shows that as the compression ratio is increased, a high PSNR is still obtained, and a minimal mean square error can also be achieved; a higher PSNR indicates better image quality.
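For reference, the Huffman-coding step at the heart of the method can be sketched as below; the histogram-driven lossy/lossless split described in the abstract is not reproduced:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code book: frequent symbols get short codes."""
    freq = Counter(symbols)
    # Heap entries: (weight, unique tiebreaker, partial code book).
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                          # degenerate one-symbol input
        return {s: '0' for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, b1 = heapq.heappop(heap)
        w2, i2, b2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in b1.items()}
        merged.update({s: '1' + c for s, c in b2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

data = [0, 0, 0, 1, 1, 2, 5, 0, 1, 0]           # e.g. quantized pixel values
book = huffman_codes(data)
bits = sum(len(book[s]) for s in data)
print(book, f'{bits} bits vs {8 * len(data)} uncompressed')
```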
This document presents a study on medial axis transformation (MAT) based skeletonization of image patterns using image processing techniques. It discusses how the MAT of an image can be extracted by first computing the Euclidean distance transform of the binary image. Local maxima in the distance transform image correspond to the MAT. Several performance evaluation metrics for analyzing skeletonized images are also introduced, such as connectivity number, thinness measurement and sensitivity. The technique is demonstrated on sample images and results show it can effectively extract the skeleton with good computational speed.
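A minimal sketch of that construction: compute the Euclidean distance transform of the binary image and keep its local maxima. (Plateaus can leave the raw ridge a few pixels thick, so practical skeletonizers post-process it.)

```python
import numpy as np
from scipy import ndimage

def medial_axis_sketch(binary: np.ndarray) -> np.ndarray:
    """Approximate the MAT as local maxima of the Euclidean distance transform."""
    dist = ndimage.distance_transform_edt(binary)
    local_max = dist == ndimage.maximum_filter(dist, size=3)
    return local_max & (dist > 0)               # maxima inside the shape only

shape = np.zeros((40, 80), bool)
shape[10:30, 10:70] = True                      # a filled rectangle
skeleton = medial_axis_sketch(shape)            # ridge along the interior axis
```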
Wavelet Transform based Medical Image Fusion With different fusion methods (IJERA Editor)
This paper proposes a wavelet transform based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computer tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and the clinical treatment planning system. The wavelet based fusion algorithms are applied to CT and MRI medical images, using the MIN, MAX and MEAN fusion rules, and the results are reported. With more multimodality medical images available in clinical applications, the idea of combining images from different modalities has become very important, and medical image fusion has emerged as a promising new research field.
The document describes a novel FPGA implementation of an image scaling processor using bilinear interpolation. The proposed method uses sharpening and clamp filters as pre-filters to the bilinear interpolator in order to improve image quality during scaling. The bilinear interpolation algorithm was chosen due to its lower complexity compared to other methods. The design was implemented in Verilog and synthesized for an FPGA, achieving a maximum frequency of 215.64MHz for 64x64 grayscale images. The hardware resources required were moderately lower than other algorithms.
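Bilinear interpolation is simple to state in software terms (the sharpening and clamp pre-filters from the hardware design are omitted); a numpy sketch:

```python
import numpy as np

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Each output pixel is a weighted blend of its 4 nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    img = img.astype(np.float64)
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

scaled = bilinear_resize(np.random.randint(0, 256, (64, 64)), 128, 128)
```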
Passive techniques for detection of tampering in images by Surbhi Arora and S... (arorasurbhi)
This document summarizes research on passive techniques for detecting tampering in digital images. It discusses common types of tampering like copy-paste and describes approaches using rule-based and training-based methods. For rule-based methods, it evaluates exact match, robust match, and SURF feature techniques. For training-based methods, it trains SVMs on block intensities, DWT/DFT moments, and SURF features. Testing showed that the combination of Hu moments and block intensity had the highest accuracy. While rule-based methods do not depend on training data, training-based methods can detect more transformations but depend on training data quality and quantity. Future work involves improving rule-based methods for noise and SURF segmentation and adding more training images.
SPEECH EVALUATION WITH SPECIAL FOCUS ON CHILDREN SUFFERING FROM AP... (sipij)
Speech disorders are very complicated in individuals suffering from Apraxia of Speech (AOS). In this paper, the pathological cases of speech-disabled children affected by AOS are analyzed. Speech signal samples of children aged between three and eight years are considered for the present study. These speech signals are digitized, enhanced, and analyzed using the Speech Pause Index, Jitter, Skew and Kurtosis. The analysis is conducted on speech data samples concerned with both place of articulation and manner of articulation. The speech disability of the pathological subjects was estimated using the results of the above analysis.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING (sipij)
The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space where the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes with best class separability. A neural network is trained on the reduced feature set (using PCA or LDA) of the images in the database for fast searching of images from the database, using the back propagation algorithm. The proposed method is evaluated on a general image database using Matlab, and the performance of these systems is measured by Precision and Recall. Experimental results show that PCA gives better performance, with higher precision and recall values and less computational complexity than LDA.
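The PCA half of the comparison reduces to one SVD; a minimal sketch with illustrative dimensions (the retrieval network is not reproduced):

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int):
    """Project data onto the k directions of maximal variance (PCA via SVD)."""
    Xc = X - X.mean(axis=0)                     # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                         # top-k principal axes
    return Xc @ components.T, components

X = np.random.rand(100, 4096)                   # e.g. 100 images, 64x64 pixels
X_reduced, axes = pca_reduce(X, k=32)           # features fed to the network
```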
TIME OF ARRIVAL BASED LOCALIZATION IN WIRELESS SENSOR NETWORKS: A LINEAR APPR... (sipij)
This document describes localization techniques for wireless sensor networks based on time of arrival (TOA) measurements. It introduces four linear localization approaches: Linear Least Squares (LLS), Subspace Approach (SA), Weighted Linear Least Squares (WLLS), and Two-step WLS. It derives the Cramer-Rao Lower Bound (CRLB) for position estimation and compares the mean square position error of the four approaches through simulation. The results show that the Two-step WLS approach achieves the highest localization accuracy.
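The LLS approach can be sketched directly: subtracting one anchor's range equation from the others cancels the quadratic unknowns, leaving an ordinary least-squares problem. A minimal 2-D sketch with hypothetical anchor positions:

```python
import numpy as np

def toa_lls(anchors: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Linear Least Squares position fix from TOA ranges d to known anchors."""
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_pos = np.array([3., 7.])
d = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(toa_lls(anchors, d))                      # close to (3, 7)
```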
ONTOLOGICAL MODEL FOR CHARACTER RECOGNITION BASED ON SPATIAL RELATIONS (sipij)
In this paper, we present a set of spatial relations between concepts describing an ontological model for a new character recognition process. Our main idea is based on the construction of a domain ontology modelling the Latin script. This ontology is composed of a set of concepts and a set of relations. The concepts represent the graphemes extracted by segmenting the manipulated document, and the relations are of two types: is-a relations and spatial relations. In this paper we are interested in the description of the second type of relations and their implementation in Java code.
A HYBRID FILTERING TECHNIQUE FOR ELIMINATING UNIFORM NOISE AND IMPULSE NOIS... (sipij)
A new hybrid filtering technique is proposed to improve the denoising of digital images. The technique is performed in two steps. In the first step, uniform noise and impulse noise are eliminated using a decision based algorithm (DBA). The denoising process is then further improved by appropriately combining the DBA with an Adaptive Neuro Fuzzy Inference System (ANFIS) for the removal of uniform noise and impulse noise from digital images. Three well known images are selected for training, and the internal parameters of the neuro-fuzzy network are adaptively optimized by training. This technique offers excellent line, edge, and fine detail preservation while, at the same time, effectively denoising digital images. Extensive simulations were run for the ANFIS network, and different filters were compared. Results show that the proposed filter has superior performance in terms of image denoising and preservation of edges and fine details.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN... (sipij)
The present paper proposes an efficient denoising algorithm which works well for images corrupted with Gaussian and speckle noise. The denoising algorithm utilizes the Alexander fractional integral filter, which works by constructing fractional mask windows computed using the Alexander polynomial. Prior to the application of the designed filter, the corrupted image is decomposed using the symlet wavelet, from which only the horizontal, vertical and diagonal components are denoised using the Alexander integral filter. A significant increase in reconstruction quality was noticed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal to noise ratio (PSNR), which was 30.8059 on average for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming the existing methods.
SkinCure: An Innovative Smart Phone Based Application to Assist in Melanoma E... (sipij)
Melanoma spreads through metastasis, and therefore it has proven to be very fatal. Statistical evidence has revealed that the majority of deaths resulting from skin cancer are a result of melanoma. Further investigations have shown that survival rates in patients depend on the stage of the disease; early detection and intervention of melanoma imply higher chances of cure. Clinical diagnosis and prognosis of melanoma are challenging, since the processes are prone to misdiagnosis and inaccuracy due to doctors' subjectivity. This paper proposes an innovative and fully functional smartphone-based application to assist in melanoma early detection and prevention. The application has two major components: the first component is a real-time alert to help users prevent skin burn caused by sunlight, for which a novel equation to compute the time for skin to burn is introduced. The second component is an automated image analysis module comprising image acquisition, hair detection and exclusion, lesion segmentation, feature extraction, and classification. The proposed system uses the PH2 dermoscopy image database from Pedro Hispano Hospital for development and testing purposes. The image database contains a total of 200 dermoscopy images of lesions, including normal, atypical, and melanoma cases. The experimental results show that the proposed system is efficient, achieving classification of the normal, atypical and melanoma images with accuracies of 96.3%, 95.7% and 97.5%, respectively.
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION (sipij)
There has been a considerable rise in research on iris recognition systems over time. Most researchers have focused on the development of new iris pre-processing and recognition algorithms for good quality iris images. In this paper, an iris recognition system using Haar wavelet packets is presented. The Wavelet Packet Transform (WPT), an extension of the discrete wavelet transform, has a multi-resolution approach; here, iris information is encoded based on the energy of the wavelet packets. Our proposed work significantly decreases the error rate in recognition of noisy images. A comparison of this work with the non-orthogonal Gabor wavelets method is made; the computational complexity of our work is also lower compared to the Gabor wavelets method.
Advances in automatic tuberculosis detection in chest x-ray images (sipij)
Tuberculosis (TB) is a very dangerous and rapidly spreading disease worldwide. In investigating cases of suspected TB, chest radiography is a key diagnostic technique, both in medical imaging and in diagnostic radiology. Computer aided diagnosis (CAD) has therefore become popular; many researchers are interested in this area, and different approaches have been proposed for TB detection and lung disease classification. In this paper, the medical background of TB in chest X-rays and a survey of the various approaches to TB detection and classification are presented. The related literature in this research area up to 2014 is surveyed.
In this paper, we present a feature-based technique for construction of a mosaic image from an underwater video sequence, which suffers from parallax distortion due to the propagation properties of light in the underwater environment. Most of the available mosaic tools and underwater image mosaicing techniques yield a final result with artifacts such as blurring, ghosting and seams due to the presence of parallax in the input images. The removal of parallax from the input images may not reduce its effects; instead it must be corrected in successive steps of mosaicing. Thus, our approach minimizes the parallax effects by adopting an efficient local alignment technique after global registration. We extract texture features using the Centre-Symmetric Local Binary Pattern (CS-LBP) descriptor in order to find feature correspondences, which are used further for estimation of a homography through RANSAC. In order to increase the accuracy of global registration, we perform preprocessing such as colour alignment between two selected frames based on colour distribution adjustment. Because of the existence of nearly 100% overlap in consecutive frames of underwater video, we select frames with minimum overlap based on mutual offset in order to reduce the computation cost during mosaicing. Our approach minimizes the parallax effects considerably in final mosaics constructed using our own underwater video sequences.
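The CS-LBP descriptor referenced above compares the four centre-symmetric neighbour pairs of each 3x3 window, yielding a 4-bit (16-bin) code per pixel; a numpy sketch with an illustrative threshold:

```python
import numpy as np

def cs_lbp(img: np.ndarray, t: float = 3.0) -> np.ndarray:
    """Centre-Symmetric LBP: one bit per centre-symmetric neighbour pair."""
    g = img.astype(np.float64)
    pairs = [                          # opposite neighbours on the 3x3 ring
        (g[:-2, :-2], g[2:, 2:]),      # NW vs SE
        (g[:-2, 1:-1], g[2:, 1:-1]),   # N  vs S
        (g[:-2, 2:], g[2:, :-2]),      # NE vs SW
        (g[1:-1, 2:], g[1:-1, :-2]),   # E  vs W
    ]
    code = np.zeros_like(g[1:-1, 1:-1], dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= (a - b > t).astype(np.uint8) << bit
    return code

patch = np.random.randint(0, 256, (32, 32))
hist = np.bincount(cs_lbp(patch).ravel(), minlength=16)  # 16-bin texture descriptor
```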
Image compression using embedded zero tree wavelet (sipij)
Compressing an image is significantly different from compressing raw binary data, so image compression uses its own compression algorithms. Wavelet transforms are used in image compression methods to provide high compression rates while maintaining good image quality. The Discrete Wavelet Transform (DWT) is one of the most common methods used in signal and image compression; it is very powerful compared to other transforms because of its ability to represent any type of signal in both the time and frequency domains simultaneously. In this paper, we discuss the use of a wavelet based image compression algorithm, the Embedded Zerotree Wavelet (EZW). Because it is based on progressive encoding, the EZW algorithm produces a bit stream of increasing accuracy as an image is compressed. All the numerical results were produced using Matlab code, and the numerical analysis of this algorithm is carried out by measuring the Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR) for the standard Lena image. Experimental results show that the method is fast, robust and efficient enough to implement for still and complex images with significant image compression.
Improve CAPTCHA's Security Using Gaussian Blur Filter (sipij)
Providing security for web servers against unwanted and automated registrations has become a big concern. To prevent these kinds of false registrations, many websites use CAPTCHAs. Among all kinds of CAPTCHAs, OCR-based or visual CAPTCHAs are very common. A visual CAPTCHA is an image containing a sequence of characters. So far, most visual CAPTCHAs, in order to resist OCR programs, use common implementations such as warping the characters, random placement and rotation of characters, etc. In this paper we apply the Gaussian blur filter, an image transformation, to visual CAPTCHAs to reduce their readability by OCR programs. We conclude that this technique makes CAPTCHAs almost unreadable for OCR programs while their readability by human users remains high.
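With Pillow the filtering step is one call; a minimal sketch, where the file names and radius are illustrative:

```python
from PIL import Image, ImageFilter

def blur_captcha(src: str, dst: str, radius: float = 1.5) -> None:
    """Gaussian-blur a rendered CAPTCHA to hinder OCR.

    A small radius smears the sharp character edges OCR engines rely on
    while keeping the text legible to humans.
    """
    Image.open(src).filter(ImageFilter.GaussianBlur(radius=radius)).save(dst)

# Hypothetical usage on a generated CAPTCHA image:
# blur_captcha('captcha.png', 'captcha_blurred.png')
```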
SIGNAL AND IMAGE PROCESSING OF OPTICAL COHERENCE TOMOGRAPHY AT 1310 NM... (sipij)
OCT is a recently developed optical interferometric technique for non-invasive diagnostic medical imaging in vivo, and the most sensitive optical imaging modality. OCT finds application in ophthalmology, blood flow estimation and cancer diagnosis, along with many non-biomedical applications. The main advantage of OCT is its high resolution, which is in the μm range, with depth of penetration in the mm range. Unlike techniques such as X-rays and CT scans, OCT does not comprise any X-ray source and therefore no radiation is involved. This research work discusses the basics of spectral domain OCT (SD-OCT), the experimental setup, and the data acquisition and signal processing involved in OCT systems. Simulation of OCT, involving modelling and signal processing carried out on the LabVIEW platform, is discussed. Using the experimental setup, some non-biomedical samples have been scanned. The signal processing and image processing of the scanned data were carried out in MATLAB and LabVIEW; some of the results thus obtained are discussed at the end.
Analog signal processing approach for coarse and fine depth estimation (sipij)
This document discusses an analog signal processing approach for coarse and fine depth estimation using stereo image pairs. It proposes modifications to existing normalized cross correlation (NCC) and sum absolute differences (SAD) stereo correspondence algorithms to reduce computation time. For the NCC algorithm, it suggests using only the diagonal elements of image blocks to compute correlation, reducing computations from 2D to 1D. For hardware implementation, it presents a new imaging architecture with parallel analog and digital systems, where the analog system performs the computationally intensive NCC algorithm on sensor data in real-time to reduce overall processing time compared to digital-only systems. Experimental results show the modified algorithms can achieve faster computation speeds without compromising performance.
VIDEO QUALITY ASSESSMENT USING LAPLACIAN MODELING OF MOTION VECTOR DISTRIBUTIONS
Video/Image quality assessment (VQA/IQA) is fundamental in various fields of video/image processing.
VQA reflects the quality of a video as most people commonly perceive it. This paper proposes a
reduced-reference mobile VQA, in which one-dimensional (1-D) motion vector (MV) distributions are used as
features of videos. This paper focuses on reduction of data size using Laplacian modeling of MV
distributions because network resource is restricted in the case of mobile video. The proposed method is
more efficient than the conventional methods in view of the computation time, because the proposed quality
metric decodes MVs directly from video stream in the parsing process rather than reconstructing the
distorted video at a receiver. Moreover, in view of data size, the proposed method is efficient because a
sender transmits only 28 parameters. We adopt the Laplacian distribution for modeling 1-D MV
histograms. 1-D MV histograms accumulated over the whole video sequences are used, which is different
from the conventional methods that assess each image frame independently. For testing the similarity
between MV histogram of reference and distorted videos and for minimizing the fitting error in Laplacian
modeling process, we use the chi-square method. To show the effectiveness of our proposed method, we
compare the proposed method with the conventional methods with coded video clips, which are coded
under varying bit rate, image size, and frame rate by H.263 and H.264/AVC. Experimental results show
that the proposed method gives performance comparable to the conventional methods while requiring
much less transmission data.
EEG SIGNAL QUANTIFICATION BASED ON MODUL LEVELS
This article proposes a contribution to quantifying the EEG signal outline. The technique uses two
tools for EEG signal characteristic extraction. Our tests were realized on the basis of 32-channel
EEG recordings using Neuroscan software. The EEG example demonstrated is referenced to CZ and is
sampled at 1000 Hz. The principal aim of this technique is to reduce the large volume of EEG signal
data without losing any information. EEG signals are quantified on the basis of a set of predefined
levels. The obtained results show that an EEG alignment can be posted in a quantified form.
COMPOSITE TEXTURE SHAPE CLASSIFICATION BASED ON MORPHOLOGICAL SKELETON AND REGIONAL MOMENTS
After several decades of research, the development of an effective feature extraction method for texture
classification is still an ongoing effort, and several techniques have been proposed to resolve the
remaining problems. In this paper a novel composite texture classification method based on innovative
pre-processing techniques, skeletonization and Regional Moments (RM) is proposed. The proposed texture
classification approach takes into account the ambiguity brought in by noise and by the different
capture and digitization processes. To offer a better classification rate, innovative pre-processing
methods are first applied to various texture images. The pre-processing mechanisms describe various
methods of converting a grey-level image into a binary image with minimal consideration of the noise
model. Then shape features are evaluated using RM on the proposed Morphological Skeleton (MS) method
with suitable numerical characterization measures for precise classification. This texture
classification study using MS and RM has given good performance. A good classification result is
achieved from the single regional moment RM10, while the others failed in classification.
Hybrid LWT-SVD watermarking optimized using metaheuristic algorithms along wi...
Medical image security provides challenges and opportunities; watermarking and encryption of medical
images provide the necessary control over the flow of medical information. In this paper a dual
security approach is employed. A medical image is considered as a watermark and is watermarked inside
a natural image. This approach aims to mislead a potential attacker by disguising the medical image
as a natural image. To further enhance security, the watermarked image is encrypted using encryption
algorithms. In this paper a multi-objective optimization approach, optimized using different
metaheuristic approaches such as Genetic Algorithm (GA), Differential Evolution (DE) and Bacterial
Foraging Optimization (BFOA), is proposed. Such optimization helps preserve the structural integrity
of the medical images, which is of utmost importance. The watermarking is proposed to be implemented
using both Lifted Wavelet Transform (LWT) and Singular Value Decomposition (SVD) techniques. The
encryption is done using RSA and AES encryption algorithms. A Graphical User Interface (GUI), which
enables the user to load the image, watermark it, encrypt it and retrieve the original image whenever
necessary, is also designed and presented in this paper.
NOVEL ALGORITHM FOR SKIN COLOR BASED SEGMENTATION USING MIXTURE OF GMMS
1) The document proposes a novel algorithm for skin color segmentation using a mixture of Gaussian mixture models (GMMs). It models four common color spaces (RGB, HSV, YCbCr, L*a*b*) each with a single GMM.
2) These GMM models are then combined into a single superior model called a mixture of GMMs (MiGMM) by assigning a weight to each GMM based on its classification rate.
3) The algorithm is evaluated on 100 test images using three metrics: correct detection rate, false detection rate, and classification rate, achieving higher recognition rates than single GMM models of individual color spaces.
Matlab Implementation of Baseline JPEG Image Compression Using Hardware Optim...
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Presentation given in the Seminar of B.Tech 6th Semester during session 2009-10 By Paramjeet Singh Jamwal, Poonam Kanyal, Rittitka Mittal and Surabhi Tyagi.
The intention of image compression is to discard worthless data from an image so as to shrink the quantity of data bits needed for image representation, lessening storage space, transmission bandwidth and time. Likewise, data hiding serves related scenarios by embedding foreign data into a picture invisibly. The review offers an image compression approach using the DWT transform, employing a steganography scheme in combination with SPIHT to compress an image.
image processing for jpeg presentati.ppt
JPEG is a lossy image compression format that works best on continuous-tone images. It uses discrete cosine transform, quantization, zigzag scanning, differential pulse-code modulation, run-length encoding, and Huffman encoding to achieve high compression ratios while maintaining good image quality. Key aspects of JPEG include adjustable compression levels, support for grayscale and color images, and sequential, progressive, lossless, and hierarchical decoding modes.
This document discusses JPEG steganography techniques. It covers the steps involved in JPEG compression including preprocessing, transformation using discrete cosine transform, quantization, and encoding. It then discusses how information can be hidden in JPEG images using least significant bit steganography by modifying the least significant bit of each pixel. Popular steganography tools for JPEG images are also listed. Methods for detecting hidden information through steganalysis of JPEG images are outlined, including using chi-square attacks and other statistical analyses.
Medical images compression: JPEG variations for DICOM standard
This is a report that introduces the technical features of the different image compression schemes found in the DICOM standard for medical image archiving and communication.
This presentation is about JPEG compression algorithm. It briefly describes all the underlying steps in JPEG compression like picture preparation, DCT, Quantization, Rendering and Encoding.
The document introduces JPEG and MPEG standards for image and video compression. JPEG uses DCT, quantization and entropy coding on 8x8 pixel blocks to remove spatial redundancy in images. MPEG builds on JPEG and additionally removes temporal redundancy between video frames using motion compensation in interframe coding of P and B frames. MPEG-1 was designed for video at 1.5Mbps while MPEG-2 supports digital TV and DVD with rates over 4Mbps. Later MPEG standards provide more capabilities for multimedia delivery and interaction.
Comparison of different Fingerprint Compression Techniques
The important features of the wavelet transform and different methods for the compression of fingerprint images have been implemented. Image quality is measured objectively using peak signal to noise ratio (PSNR) and mean square error (MSE). A comparative study using the discrete cosine transform based Joint Photographic Experts Group (JPEG) standard, wavelet based basic Set Partitioning in Hierarchical Trees (SPIHT) and Modified SPIHT is done. The comparison shows that Modified SPIHT offers better compression than basic SPIHT and JPEG. The results will help application developers to choose a good wavelet compression system for their applications.
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...
1) The document proposes a new simple method to reduce visual block artifacts in images compressed using DCT (used in JPEG) for urban surveillance systems.
2) The method smooths only the connection edges between adjacent blocks while keeping other image areas unchanged.
3) Simulation results show the proposed method achieves better image quality as measured by PSNR compared to median and Wiener filters, while using significantly fewer computational resources.
This document summarizes a research paper that proposes a technique to enhance JPEG image compression by reducing image size. The technique modifies the conventional run length coding used in JPEG entropy encoding. Instead of tracking runs of zeros, it tracks the exact locations and values of non-zero elements in the quantized DCT coefficient matrix. For consecutive non-zero elements, it only stores the location of the first element. The technique was tested on various images in MATLAB and proved more efficient than conventional run length coding in all cases, reducing the compressed image size.
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX
To store and transmit digital images in the least memory space and bandwidth, image compression is needed. Image compression refers to the process of minimizing the image size by removing redundant data bits in a manner that the quality of the image is not degraded. Hence image compression reduces the image size without reducing its quality. In this paper an attempt is made to enhance basic JPEG compression by reducing image size. The proposed technique amends the conventional run length coding for JPEG (Joint Photographic Experts Group) image compression by using the concept of a sparse matrix. In this algorithm, the redundant data is completely eliminated, leaving the quality of the image unaltered. The JPEG standard document specifies three steps: discrete cosine transform, quantization, followed by entropy coding. The proposed work aims at the enhancement of the third step, entropy coding.
This document discusses the JPEG image compression standard. It begins with an overview of what JPEG is, including that it is an international standard for compressing color and grayscale images up to 24 bits per pixel. The document then discusses the basic JPEG compression pipeline of encoding and decoding. It also outlines some of the major algorithms used in JPEG compression, including color space transformation, discrete cosine transform (DCT), quantization, zigzag scanning, and entropy coding. A key component discussed is the DCT, which converts image data into frequency domains and is useful for energy compaction in compression. The document concludes with noting implementations of JPEG and DCT in fields like image processing, scientific analysis, and audio processing.
DCT based Steganographic Evaluation parameter analysis in Frequency domain by...
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve evaluation parameters like PSNR, mean square error, and capacity. The authors propose modifying the default 8x8 quantization table by adjusting frequency values in 4 bands to increase image quality for the embedded stego image. Experimental results on test images show that using the modified table improves PSNR, decreases mean square error, and increases maximum embedding capacity compared to the default table. Therefore, the proposed method allows more secret data to be hidden with less distortion and improved image quality.
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve embedding capacity and image quality. The authors propose modifying the default 8x8 quantization table by changing frequency values to increase the peak signal-to-noise ratio and capacity while decreasing the mean square error of embedded images. Experimental results on test images show increased capacity, PSNR and reduced error when using the modified versus default table, indicating improved stego image quality. The proposed method aims to securely embed more data with less distortion than traditional DCT-based steganography.
Wavelet based Image Coding Schemes: A Recent Survey
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
International Journal on Soft Computing (IJSC)
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
IMAGE COMPRESSION BY EMBEDDING FIVE MODULUS METHOD INTO JPEG
Signal & Image Processing : An International Journal (SIPIJ) Vol.4, No.2, April 2013
DOI : 10.5121/sipij.2013.4203
IMAGE COMPRESSION BY EMBEDDING FIVE
MODULUS METHOD INTO JPEG
Firas A. Jassim
Management Information Systems Department,
Irbid National University, Irbid 2600, Jordan
Firasajil@yahoo.com
ABSTRACT
The standard JPEG format is almost the optimum format in image compression. The compression ratio in
JPEG sometimes reaches 30:1. The compression ratio of JPEG can be increased by embedding the Five
Modulus Method (FMM) into the JPEG algorithm. The novel algorithm achieves twice the compression
ratio of the standard JPEG algorithm or more. The novel algorithm is called FJPEG (Five-JPEG). The
quality of the reconstructed image after compression closely approaches that of JPEG. Standard test
images have been used to support and implement the suggested idea in this paper, and the error
metrics have been computed and compared with JPEG.
KEYWORDS
Image compression, JPEG, DCT, FMM, FJPEG.
1. INTRODUCTION
The main goal of image compression methods is to represent the original images with fewer bits.
Recently, image compression has become very popular in many research areas. According to the research
area, one of the two types of compression, lossless or lossy, can be used [13].
Lossless compression can retrieve the original image exactly after reconstruction. Since it is
impossible to compress an image at a high compression ratio without errors, lossy image compression
is used to obtain high compression ratios. Consequently, reducing image size with lossy compression
gives a much more convenient ratio than lossless compression [7][9].
Since the mid-eighties of the last century, the International Telecommunication Union (ITU) and the
International Organization for Standardization (ISO) have been working together to establish a
standard compression format for still images. The recommendation ISO DIS 10918-1, known as JPEG
(Joint Photographic Experts Group), covers digital compression and coding of continuous-tone still
images and was also published as ITU-T Recommendation T.81 [11]. After comparing many coding schemes
for image compression, the JPEG members selected the Discrete Cosine Transform (DCT). JPEG became a
Draft International Standard (DIS) in 1991 and an International Standard (IS) in 1992 [12]. JPEG has
become an international standard for lossy compression of digital images.
2. FIVE MODULUS METHOD
The Five Modulus Method (FMM) was first introduced by [6]. The main concept of this method is to
convert the value of each pixel into a multiple of five. This conversion omits parts of the signal
that will not be noticed by the signal receiver, namely the Human Visual System (HVS). Since
neighbouring pixels in an image matrix are correlated, finding a less correlated representation of
the image is one of the most important tasks. The main principle of image compression states that
the neighbours of a pixel tend to have the same immediate neighbours [4]. Hence, the FMM technique
divides the image into 8×8 blocks. After that, each pixel in every block is transformed into a
number divisible by 5. The effect of this transformation will not be noticed by the Human Visual
System (HVS) [6]. Therefore, each pixel value comes from the multiples of 5 only, i.e. 0, 5, 10, 15,
20, … , 255. The FMM algorithm can be stated as:
if Pixel value Mod 5 = 4 then
    Pixel value = Pixel value + 1
else if Pixel value Mod 5 = 3 then
    Pixel value = Pixel value + 2
else if Pixel value Mod 5 = 2 then
    Pixel value = Pixel value - 2
else if Pixel value Mod 5 = 1 then
    Pixel value = Pixel value - 1
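In effect, the four rules round every pixel to its nearest multiple of 5 (remainders of 1 and 2
round down, 3 and 4 round up). As a minimal sketch, assuming 8-bit grayscale pixels held in a NumPy
array (the helper name fmm is ours, not the paper's):

import numpy as np

def fmm(block):
    # Five Modulus Method: map each pixel to its nearest multiple of 5
    out = block.astype(np.int32)
    r = out % 5
    out[r == 4] += 1   # e.g. 109 -> 110
    out[r == 3] += 2   # e.g. 113 -> 115
    out[r == 2] -= 2   # e.g. 107 -> 105
    out[r == 1] -= 1   # e.g. 106 -> 105
    return out

Equivalently, 5 * np.round(block / 5) gives the same result, since an integer remainder never falls
exactly halfway between two multiples of 5.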
Now, to illustrate the Five Modulus Method (FMM), an arbitrary 8×8 block has been taken randomly
from a digital image and is shown in table (1).
Table 1. Original 8×8 block
106 98 104 102 109 110 107 113
103 107 104 110 109 110 110 113
106 106 105 110 111 107 104 108
104 105 110 111 109 108 110 104
106 106 119 113 111 107 109 108
106 104 101 105 104 104 107 113
97 103 104 101 102 104 106 110
103 106 110 105 103 105 103 108
After that, the FMM algorithm shown earlier is applied to the 8×8 block in table (1). Every pixel
value is converted into a multiple of five, i.e. the first pixel, which is (106), is converted into
(105), etc. The resulting 8×8 block is shown in table (2).
Table 2. Converting 8×8 block by five modulus method (FMM)
105 100 105 100 110 110 105 115
105 105 105 110 110 110 110 115
105 105 105 110 110 105 105 110
105 105 110 110 110 110 110 105
105 105 120 115 110 105 110 110
105 105 100 105 105 105 105 115
95 105 105 100 100 105 105 110
105 105 110 105 105 105 105 110
Now, to complete the FMM method, the 8×8 block shown in table (2) is divided by 5 to reduce the
pixel values to smaller values. Therefore, the first converted pixel (105) becomes (105/5 = 21),
etc. The resulting 8×8 block after division is shown in table (3).
Table 3. Dividing 8×8 block in table (1) by 5
21 20 21 20 22 22 21 23
21 21 21 22 22 22 22 23
21 21 21 22 22 21 21 22
21 21 22 22 22 22 22 21
21 21 24 23 22 21 22 22
21 21 20 21 21 21 21 23
19 21 21 20 20 21 21 22
21 21 22 21 21 21 21 22
The main concept of the FMM method is to reduce the dispersion (variation) between pixel values in
the same 8×8 block. Hence, the standard deviation of the original 8×8 block is (3.84) while it is
(0.85) for the transformed 8×8 block. This implies that the storage space for the transformed 8×8
block will be less than that of the original 8×8 block.
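This dispersion claim can be verified directly. A small sketch, assuming the paper's figures are
sample standard deviations (MATLAB's default std), reproduces both values from the block of
table (1):

import numpy as np

orig = np.array([                      # the 8x8 block of table (1)
    [106,  98, 104, 102, 109, 110, 107, 113],
    [103, 107, 104, 110, 109, 110, 110, 113],
    [106, 106, 105, 110, 111, 107, 104, 108],
    [104, 105, 110, 111, 109, 108, 110, 104],
    [106, 106, 119, 113, 111, 107, 109, 108],
    [106, 104, 101, 105, 104, 104, 107, 113],
    [ 97, 103, 104, 101, 102, 104, 106, 110],
    [103, 106, 110, 105, 103, 105, 103, 108]])

fmm_block = (5 * np.round(orig / 5)).astype(int)   # table (2)
divided = fmm_block // 5                           # table (3)

print(np.std(orig, ddof=1))      # ~3.84, as reported
print(np.std(divided, ddof=1))   # ~0.85, as reported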
3. FJPEG ENCODING AND DECODING
The basic technique in FJPEG encoding is to apply FMM first. Each 8×8 block is transformed
into multiples of 5, i.e. Five Modulus Method (FMM), see table (2). After that, dividing the
whole 8×8 block by 5 to obtain new pixel values range [0..51] which are the results of
dividing [0..255] by 5. Now, the 8×8 block is ready to implement the standard JPEG
exactly starting with DCT and so on. The new file format if FJPEG which is the same as
JPEG but all its values are multiples of 5. Actually, the FJPEG image format could be sent
over the internet or stored in the storage media, or what ever else, with a lesser file size as a
compressed file. On the other hand, at the decoding side, when reconstructing the image,
firstly, the IFMM (Inverse Five Modulus Method) could be applied by multiplying the 8×8
block by 5 to retrieve, approximately, the original 8×8 block. After that, the same JPEG
decoding may be applied as it is. Therefore, the main contribution is this article is to embed
FMM method into JPEG to reduce the file size. The whole encoding and decoding
procedures may be shown in figure (1).
One of the main advantages in FJPEG compared to JPEG is that the reduction in file size is
noticeable. Unfortunately, one of the main disadvantages is that on the decoding side and before
applying JPEG decoding the image must be multiplied by 5. Theses calculations will be evaluated
on the computer processor at the decoding side. Actually, these calculations will surely offtake
some time and space from the memory and CPU.
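A sketch of the full FJPEG round trip, with Pillow's JPEG codec standing in for the paper's JPEG
stage (the quality setting, the clipping step, and running the JPEG decode before the multiplication
by 5 are our assumptions, not spelled out in the paper):

import io
import numpy as np
from PIL import Image   # Pillow's JPEG codec stands in for the paper's JPEG stage

def fjpeg_encode(gray, quality=75):
    # FMM, divide by 5, then ordinary JPEG encoding
    fmm = (5 * np.round(gray / 5.0)).astype(np.uint8)   # multiples of 5
    reduced = fmm // 5                                  # values in [0..51]
    buf = io.BytesIO()
    Image.fromarray(reduced).save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def fjpeg_decode(data):
    # Ordinary JPEG decoding, then IFMM: multiply by 5
    reduced = np.asarray(Image.open(io.BytesIO(data)), dtype=np.int32)
    return np.clip(reduced * 5, 0, 255).astype(np.uint8)   # clip guards against codec noise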
Figure 1. Encoding and Decoding procedures in FJPEG
4. DISCRETE COSINE TRANSFORM
Discrete Cosine Transformation (DCT), is used to compress digital images by reducing the
number of pixels used to express 8x8 blocks into a lesser number of pixels. Nowadays, JPEG
standard uses the DCT as its essentials. This type of lossy encoding has become the most popular
transform for image compression especially in JPEG image format. The origin of the DCT back
to 1974 by [1]. The DCT algorithm is completely invertible which makes it useful for both
lossless and lossy image compression. The DCT transform the pixel values of the 8×8 block into
two types. The first type is the highest values that can be positioned at the upper left corner of the
8×8 block. While the second type represents the values with the smaller value which can be
found at the remaining areas of the 8×8 block. The coefficients of the DCT can be used to
reconstruct the original 8×8 block through the Inverse Discrete Cosine Transform (IDCT) which
can be used to retrieve the image from its original representation [7].
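For reference, the 2-D DCT-II that JPEG applies to an 8×8 block f(x, y) (the formula itself is not
reproduced in the paper) is

$$ F(u,v) = \frac{1}{4}\, C(u)\, C(v) \sum_{x=0}^{7}\sum_{y=0}^{7} f(x,y)\,
\cos\frac{(2x+1)u\pi}{16}\,\cos\frac{(2y+1)v\pi}{16},
\qquad C(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k > 0 \end{cases} $$

The IDCT takes the same form with C(u)C(v) moved inside the double sum over u and v.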
Images can be separated by the DCT into frequency components, where the less important frequencies
are omitted through quantization and the important frequencies are used to reconstruct the image
during decoding [8]. The two main advantages of embedding FMM into JPEG are:
• Reducing the dispersion (variation) between DCT coefficients.
• Decreasing the number of non-zero elements among the DCT coefficients.
According to [3], the DCT-STD measure (STD is the standard deviation) has been used to measure the
clarity of an image. Here, DCT-STD is used to measure the difference between the DCT of an 8×8
block taken from the traditional JPEG image and an 8×8 block taken from the FJPEG image, shown in
tables 4 and 5 respectively.
Table 4. DCT transform of the original block
853 -10 -2 -6 2 0 2 0
7 -3 -2 6 4 0 0 0
-8 -5 6 1 0 -4 3 -1
0 -5 5 0 1 0 2 2
4 4 -4 -1 -3 3 5 5
-8 -3 1 3 0 0 0 1
-1 2 3 3 5 -1 1 0
1 2 -3 -2 0 -1 3 3
Table 5. DCT transform of the FMM block
170 -2 0 -1 0 0 0 0
1 0 0 1 1 0 0 0
-1 -1 1 0 0 -1 0 0
0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 1
-1 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 -1 0 0 0 0
The standard deviation for table (4) is 106.65, while the standard deviation for table (5) is
21.26. When the standard deviation is high, it means that the differences between most values and
their mean are high, and vice versa. Hence, the coefficients of table (4) have higher variation
than those of table (5). This means that the dispersion obtained under the FMM transform is less
than the dispersion of the original DCT. Also, it can be seen that the number of non-zero elements
of table (5) is smaller than that of table (4). According to [5], every zero element of the DCT
matrix saves operations and reduces the time complexity. Also, the total processing time required
for the IDCT decreases when the number of non-zero elements of the DCT coefficient matrix is small
[10]. Indeed, in image compression, a few non-zero DCT coefficients can be used to represent an
image [2].
Fortunately, the opinions mentioned above argue in favour of the idea proposed in this article:
reducing the number of non-zero elements of the DCT matrix reduces the time complexity. Hence,
using the DCT obtained through the FMM method gives better results than using the traditional DCT,
which contains more non-zero elements.
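Both effects are easy to check numerically. A sketch, reusing orig and divided from the earlier
snippet and SciPy's orthonormal DCT-II (exact values may differ slightly from tables (4) and (5)
because of rounding):

import numpy as np
from scipy.fft import dctn   # multidimensional type-II DCT

def dct2(block):
    # Orthonormal 2-D DCT-II of an 8x8 block, as used by JPEG
    return dctn(block.astype(float), norm="ortho")

dct_orig = np.round(dct2(orig))     # roughly table (4): DC term near 853
dct_fmm = np.round(dct2(divided))   # roughly table (5): DC term near 170, far sparser

print(np.std(dct_orig, ddof=1), np.std(dct_fmm, ddof=1))     # ~106 vs ~21
print(np.count_nonzero(dct_orig), np.count_nonzero(dct_fmm))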
5. EXPERIMENTAL RESULTS
Actually, the proposed FJPEG in this article has been implemented to a variety of standard test
images with different spatial and frequency characteristics. As mentioned earlier, the embedding
of FMM into JPEG will formulate FJPEG which has a noticeable difference in image size.
Experimentally, the compression ratio between JPEG and FJPEG has been showed in table (6).
Also, PSNR between JPEG and FJPEG was calculated in the same table.
Table 6. File size, CR and PSNR for test images
File Size (KB) CR PSNR
JPEG FJPEG JPEG FJPEG JPEG FJPEG
Lena 36.9 13.5 20.8:1 56.9:1 37.4882 32.3178
Baboon 75.6 27.6 10.2:1 28.8:1 30.7430 25.0817
Peppers 40.5 14.3 19.0:1 53.7:1 35.5778 31.6617
F16 26 15 29.5:1 51.2:1 Inf Inf
Bird 131 27 7.7:1 37.5:1 53.4068 30.4266
ZigZag 64.2 23.5 5.6:1 15.2:1 31.0531 20.1237
Houses 66.8 24.2 8.9:1 24.5:1 Inf 21.9970
Mosque 23.9 8.33 14.7:1 42.1:1 36.1242 28.6154
Figure 2. Variety of test images
As an example, three test images (Lena, Baboon, and Peppers) have been used, shown in figures 3, 4,
and 5, to illustrate the visual and storage differences between (a) standard JPEG, (b) FJPEG
directly after encoding, i.e. the file that would be sent over the internet or stored on storage
media, which has a small file size compared to JPEG, and (c) FJPEG after reconstruction, i.e.
multiplied by 5 on the decoding side.
As seen from figures 3, 4, and 5, the differences are negligible to the human eye and cannot be
recognized visually. As a quantitative measure, the PSNR has been used to compute the
signal-to-noise ratio for the original JPEG and the FJPEG, and the results are presented in table
(6). The differences in PSNR between the two images are reasonable. As an example, for the Lena
image the PSNR of the JPEG was (37.4882) while the PSNR of the FJPEG was (32.3178), which is not a
very large difference. Therefore, one could accept this modest difference considering the large
difference in file size: (36.9 KB) for JPEG versus (13.5 KB) for FJPEG.
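For completeness, a sketch of the PSNR computation used in table (6) (the standard definition for
8-bit images; an Inf entry corresponds to a mean squared error of zero):

import numpy as np

def psnr(reference, test):
    # Peak Signal-to-Noise Ratio in dB for 8-bit images
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images, as in the Inf rows of table (6)
    return 10.0 * np.log10(255.0 ** 2 / mse)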
(a) (b) (c)
Figure 3. (a) JPEG (36.9 KB) (b) FJPEG (sent) (13.5 KB) (c) FJPEG×5 (reconstructed) (24 KB)
(a) (b) (c)
Figure 4. (a) JPEG (75.6 KB) (b) FJPEG (sent) (26.7 KB) (c) FJPEG×5 (reconstructed) (50.6 KB)
(a) (b) (c)
Figure 5. (a) JPEG (40.5 KB) (b) FJPEG (sent) (14.3 KB) (c) FJPEG×5 (reconstructed) (25.9 KB)
The main difference in compression ratio between FJPEG and JPEG may be illustrated
graphically in figure (6).
Figure 6. Difference in compression ratio between JPEG and FJPEG (bar chart of CR(x:1), from 0 to
60, for the Lena, Baboon, Peppers, F16, Bird, ZigZag, Houses and Mosque images)
6. CONCLUSIONS
The FJPEG method stated in this article can be used as an alternative to the traditional JPEG in
most disciplines of science and engineering, especially in image compression, because of its
noticeable difference in image size, i.e. compression ratio. As a first recommendation, the CPU
time and memory taken to reconstruct the FJPEG image at the decoding side needs further study. As a
second recommendation, the implementation of FJPEG in video compression needs to be discussed and
researched. As a third recommendation, since the counterpart of the JPEG method in audio is MP3,
using the FMM idea in audio compression could be researched.
REFERENCES
[1] Ahmed, N., Natarajan, T., and Rao, K. R. (1974) “Discrete Cosine Transform,” IEEE Transactions
on Computers, C-23:90–93.
[2] Aruna Bhadu, Vijay Kumar, Hardayal Singh Shekhawat, and Rajbala Tokas, An Improved Method of
feature extraction technique for Facial Expression Recognition using Adaboost Neural Network,
International Journal of Electronics and Computer Science Engineering (IJECSE) Volume 1, Number
3 , pp: 1112-1118, 2012.
[3] Chu-Hui Lee and Tsung-Ping Huang, Comparison of Two Auto Focus Measurements DCT-STD and
DWT-STD, Proceeding of the International MultiConference of Engineers and Computer Scientists,
Vol I, IMECS, March 14, 2012, Hong Kong.
[4] David Salomon. Data Compression: The Complete Reference. fourth edition, Springer 2007.
[5] Deepak Nayak, Dipan Mehta, and Uday Desai, A Novel Algorithm for Reducing Computational
Complexity of MC-DCT in Frequency-Domain Video Transcoders, ISCAS In International
Symposium on Circuits and Systems (ISCAS 2005), 23-26 May 2005, Kobe, Japan. pages 900-903,
IEEE, 2005.
[6] Firas A. Jassim and Hind E. Qassim, Five Modulus Method For Image Compression, Signal & Image
Processing : An International Journal (SIPIJ) Vol.3, No.5, October 2012
[7] Gonzalez R. C. and Woods R. E., “Digital Image Processing”, Second edition, 2004.
[8] Ken Cabeen and Peter Gent, “Image Compression and the Discrete Cosine Transform,” Math 45,
College of the Redwoods.
[9] Khalid Sayood, Introduction to Data Compression, third edition. Morgan Kaufmann, 2006.
[10] Lee J., Vijaykrishnan N., and Irwin M. J., Inverse Discrete Cosine Transform Architecture
Exploiting Sparseness and Symmetry Properties, Embedded & Mobile Computing Design Center,
The Pennsylvania State University, NSF ITR 0082064.
[11] Tinku Acharya and Ping-Sing Tsai, JPEG2000 Standard for Image Compression Concepts,
Algorithms and VLSI Architectures, JOHN WILEY & SONS, 2005.
[12] Wallace G. K., The JPEG Still Picture Compression Standard, Communication of the ACM, Vol. 34,
No. 4, 1991, pp. 30-44.
[13] Zhang H., Zhang ., and Cao S., “Analysis and Evaluation of Some Image Compression Techniques,”
High Performance Computing in Asia- Pacific Region, 2000 Proceedings, 4th Int. Conference, vol. 2,
pp: 799-803, May, 2000.
Authors
Firas Ajil Jassim was born in Baghdad, Iraq, in 1974. He received the B.S. and M.S.
degrees in Applied Mathematics and Computer Applications from Al-Nahrain
University, Baghdad, Iraq, in 1997 and 1999, respectively, and the Ph.D. degree in
Computer Information Systems (CIS) from the Arab Academy for Banking and
Financial Sciences, Amman, Jordan, in 2012. In 2012, he joined the faculty of the
Department of Business Administration, College of Management Information Systems,
Irbid National University, Irbid, Jordan, where he is currently an assistant professor.
His current research interests are image compression, image interpolation, image
segmentation, image enhancement, and simulation.