The main aim of image compression is to represent an image with the minimum number of bits and thus reduce its size. This paper presents a Symbols Frequency based Image Coding (SFIC) technique for image compression that exploits the frequency of occurrence of pixel values in an image. A frequency factor y is used to merge pixel values that fall within the same range: all pixel values lying within a range of width y are mapped to the smallest pixel value in that set. The larger pixel values in each set are thus omitted, the number of distinct symbols shrinks, and a higher compression ratio results. The selection of the frequency factor y has a strong influence on the performance of the proposed scheme; nevertheless, high PSNR values are obtained because the omitted pixels are mapped to values in the same range. The proposed approach is analyzed both with and without quantization, and the new compression model is compared with Quadtree-segmented AMBTC with Bit Map Omission (AMBTC-QTBO). The experimental analysis shows that the proposed SFIC scheme, in both its lossless and lossy forms, outperforms AMBTC-QTBO, making it a good choice for lossless and lossy compression applications.
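As a rough illustration of the merging step described above, the following Python sketch (not the authors' code; the binning rule and the use of NumPy are assumptions) maps every pixel to the lowest value of the bin of width y that contains it, so far fewer distinct symbols remain to be coded:

```python
import numpy as np

def merge_by_frequency_factor(image: np.ndarray, y: int) -> np.ndarray:
    """Map each pixel to the lowest value of its bin of width y.

    This is only a sketch of the merging idea described in the abstract;
    the actual SFIC scheme may choose the value sets differently.
    """
    image = image.astype(np.uint16)
    merged = (image // y) * y          # lowest value of the bin containing the pixel
    return merged.astype(np.uint8)

# Example: with y = 4, values 0..3 -> 0, 4..7 -> 4, and so on,
# so an 8-bit image ends up with at most 64 distinct symbols.
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(np.unique(merge_by_frequency_factor(img, 4)))
```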
This document presents a new method for image compression called the Haar Wavelet Based Joint Compression Method Using Adaptive Fractal Image Compression (DWT+AFIC). It combines the discrete wavelet transform with an existing adaptive fractal image compression technique to improve compression ratio and reconstructed image quality compared with previous fractal image compression methods. The document introduces fractal image compression and its limitations, describes the proposed DWT+AFIC method alongside five other compression techniques, and provides simulation results on test images showing that DWT+AFIC achieves higher peak signal to noise ratios and compression ratios than the other methods. It concludes that DWT+AFIC decreases encoding time while increasing compression ratio and maintaining reconstructed image quality.
2 ijaems dec-2015-5-comprehensive review of huffman encoding technique for im...INFOGAIN PUBLICATION
Image processing is used in every field of life; it is a growing field with a large number of users. It is used to remove problems present within an image, and a number of techniques have been suggested to improve images, image enhancement being the most common. The space required to store an image is also a very important factor, and the main aim of many image processing techniques is to decrease this space requirement, which is achieved through compression. Compression techniques are either lossy or lossless in nature. This paper conducts a comprehensive survey of Huffman coding, a lossless compression technique, in detail.
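For reference, a minimal Huffman coder over pixel values, of the kind such surveys discuss, can be sketched as follows (a generic textbook implementation using Python's heapq, not code from the surveyed work):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        t1 = {s: "0" + c for s, c in t1.items()}
        t2 = {s: "1" + c for s, c in t2.items()}
        heapq.heappush(heap, (f1 + f2, counter, {**t1, **t2}))
        counter += 1
    return heap[0][2]

pixels = [12, 12, 12, 200, 200, 37]
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
print(codes, encoded)
```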
Iaetsd a review on enhancement of degradedIaetsd Iaetsd
This document proposes an image binarization technique to enhance degraded document images. It combines local image contrast with local image gradient to derive an adaptive local contrast map. This map is binarized and combined with Canny edge detection to identify text stroke edges. An adaptive threshold is then used to separate text from the document based on the intensities of the detected edge pixels. Simulation results show the technique outperforms existing methods such as the local maximum/minimum and Niblack approaches on metrics like F-measure, achieving simpler, more robust binarization of degraded documents.
Volumetric Medical Images Lossy Compression using Stationary Wavelet Transfor...Omar Ghazi
Abstract: The aim of the study is to reduce the size required for storage along with decreasing the bitrate and the bandwidth for the process of sending and receiving the image. It also aims to decrease the time required for the process as much as possible. This study proposes a novel system for efficient lossy volumetric medical image compression using the Stationary Wavelet Transform and Linde-Buzo-Gray vector quantization. The system combines the Linde-Buzo-Gray vector quantization technique for lossy compression with arithmetic coding and Huffman coding for lossless compression. The proposed system uses the Stationary Wavelet Transform and compares the results to the Discrete Wavelet Transform, the Lifting Wavelet Transform and the Discrete Cosine Transform at three decomposition levels. It also compares the results obtained using the transforms with only arithmetic coding and Huffman coding for lossless compression. The results show that the proposed system outperforms the others.
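The Linde-Buzo-Gray vector quantization stage can be illustrated with a small codebook-training sketch; the splitting rule, block size and iteration count below are generic assumptions rather than the authors' settings:

```python
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, iters: int = 20) -> np.ndarray:
    """Train a vector-quantization codebook by codeword splitting plus k-means refinement."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # Split every codeword into a perturbed pair, then refine.
        codebook = np.vstack([codebook * 1.01, codebook * 0.99])
        for _ in range(iters):
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Toy example: 4x4 image blocks flattened to 16-dimensional vectors.
blocks = np.random.rand(500, 16)
cb = lbg_codebook(blocks, size=8)
indices = np.linalg.norm(blocks[:, None, :] - cb[None], axis=2).argmin(axis=1)
print(cb.shape, indices[:10])   # each block is now stored as a small codebook index
```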
Conference Proceedings of the National Level Technical Symposium on Emerging Trends in Technology, TECHNOVISION ’10, G.N.D.E.C. Ludhiana, Punjab, India- 9th-10th April, 2010
Efficient Image Compression Technique using Clustering and Random PermutationIJERA Editor
Multimedia data compression is challenging because of the possibility of data loss and the large amount of storage the data requires. Minimizing storage and transmitting the data properly both call for compression. This dissertation proposes a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is partitioned into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which is used to select the optimal blocks of the DWT transform. For the experiments, standard images such as Lena, Barbara and Cameraman were used, each at a resolution of 256x256; the images were obtained from Google.
This document discusses using super-resolution-based in-painting for object removal in images. It begins with an overview of in-painting and exemplar-based in-painting methods. It then proposes a new framework that combines exemplar-based in-painting with a single-image super-resolution method. This approach improves image quality by producing high-resolution outputs with less noise compared to exemplar-based in-painting alone. The document concludes the proposed method increases robustness for applications like satellite imaging and medical imaging by providing high quality images with damaged objects removed.
The document describes a method for document image binarization using Niblack's local thresholding method along with a post-processing step. Niblack's method calculates thresholds from local means and standard deviations but is limited by the choice of window size. The post-processing step constructs a threshold surface by interpolating edge points so that noise far from objects is removed more effectively. The method was tested on samples from the Tobacco800 database and was shown to retain object information while suppressing background noise.
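Niblack's rule is commonly written as T(x, y) = m(x, y) + k * s(x, y), the local mean plus a weighted local standard deviation over a window; a sketch using SciPy's uniform filter (the window size and k below are illustrative, not taken from the document) follows:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray: np.ndarray, window: int = 25, k: float = -0.2) -> np.ndarray:
    """Binarize with Niblack's rule T = local_mean + k * local_std."""
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, size=window)
    sq_mean = uniform_filter(gray * gray, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    threshold = mean + k * std
    return (gray > threshold).astype(np.uint8) * 255
```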
Study of Image Inpainting Technique Based on TV Modelijsrd.com
This paper concerns an image inpainting method with which a damaged or missing portion of an image can be reconstructed. A fast image inpainting algorithm based on the TV (total variation) model is proposed, built on an analysis of local characteristics which shows that the more information there is around damaged pixels, the faster the information diffuses. The algorithm first stratifies and filters the pixels around the damaged region according to priority, and then iteratively inpaints the damaged pixels from the outside inwards, again in order of priority. This increases the inpainting speed and improves the visual result.
The document summarizes an automatic text extraction system for complex images. The system uses discrete wavelet transform for text localization. Morphological operations like erosion and dilation are used to enhance text identification and segmentation. Text regions are segmented using connected component analysis and properties like area and bounding box shape. The extracted text is recognized and shown in a text file. The system allows modifying the recognized text and shows better performance than existing techniques.
Dissertation synopsis for image denoising (noise reduction) using non local me...Arti Singh
Dissertation report on image denoising using the non-local means algorithm, with a discussion of the sub-problems of noise reduction and a description of the image noise problem.
The document discusses clustering images based on their properties. Images are converted into intensity, contrast, Weibull and fractal images. Eight properties are calculated for each image type, including brightness, standard deviation, entropy, skewness, kurtosis, separability, spatial frequency and visibility. The properties are normalized and clustered using k-means clustering. Tables show normalized property values for different image types. The clustering groups similar images based on their discriminative properties.
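A minimal version of the normalize-then-cluster step might look like the sketch below; the feature matrix is a placeholder and the use of scikit-learn's KMeans is an assumption, since the document does not name a library:

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: images, columns: the eight measured properties
# (brightness, standard deviation, entropy, skewness, kurtosis,
#  separability, spatial frequency, visibility) -- placeholder values here.
features = np.random.rand(12, 8)

# Min-max normalize each property to [0, 1] before clustering.
norm = (features - features.min(axis=0)) / (np.ptp(features, axis=0) + 1e-12)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(norm)
print(labels)   # cluster assignment for each image
```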
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides an exploratory review of soft computing techniques for image segmentation. It discusses various segmentation techniques including discontinuity-based techniques like point, line and edge detection using spatial filtering. Thresholding techniques like global, adaptive and multi-level thresholding are also covered. Region-based techniques such as region growing, region splitting/merging and morphological watersheds are summarized. The document concludes that future work can focus on developing genetic segmentation filters using a genetic algorithm approach for medical image segmentation.
On Text Realization Image SteganographyCSCJournals
In this paper the steganography strategy is implemented in a different way and from a different scope: the important data is neither hidden in an image nor transferred through the communication channel inside an image. Instead, a well-known image that exists on both sides of the channel is used, and a text message containing the important data is transmitted. With suitable operations, the source image can be re-mixed and re-made. The algorithm is implemented in MATLAB 7 and shows a high ability to handle images of different types and sizes, with perfect reconstruction achieved on the receiving side. Most interestingly, this algorithm for secured image transmission transmits no images at all.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/person-re-identification-and-tracking-at-the-edge-challenges-and-techniques-a-presentation-from-the-university-of-auckland/
Morteza Biglari-Abhari, Senior Lecturer at the University of Auckland, presents the “Person Re-Identification and Tracking at the Edge: Challenges and Techniques” tutorial at the May 2021 Embedded Vision Summit.
Numerous video analytics applications require understanding how people are moving through a space, including the ability to recognize when the same person has moved outside of the camera’s view and then back into the camera’s view, or when a person has passed from the view of one camera to the view of another. This capability is referred to as person re-identification and tracking. It’s an essential technique for applications such as surveillance for security, health and safety monitoring in healthcare and industrial facilities, intelligent transportation systems and smart cities. It can also assist in gathering business intelligence such as monitoring customer behavior in shopping environments. Person re-identification is challenging.
In this talk, Biglari-Abhari discusses the key challenges and current approaches for person re-identification and tracking, as well as his initial work on multi-camera systems and techniques to improve accuracy, especially fusing appearance and spatio-temporal models. He also briefly discusses privacy-preserving techniques, which are critical for some applications, as well as challenges for real-time processing at the edge.
Influence of local segmentation in the context of digital image processingiaemedu
This document discusses local segmentation in digital image processing. It begins by defining local segmentation as a reasonable approach for low-level image processing that examines existing algorithms and creates new ones. Local segmentation can be applied to important image processing tasks. The document then evaluates using local segmentation for image denoising, finding it highly competitive with state-of-the-art algorithms. Local segmentation attempts to separate signal from noise on a local scale, allowing higher-level algorithms to operate directly on the signal without amplifying noise.
Content Based Image Retrieval (CBIR) aims at retrieving images from a database based on a user query that is in visual form rather than the traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research, and so on. Although extensive research has been done on content based image retrieval in the spatial domain, most images on the internet are JPEG compressed, which pushes the need for image retrieval in the compressed domain itself rather than decoding to raw format before comparison and retrieval. This research addresses the need to retrieve images from a database based on features extracted in the compressed domain, together with the application of a genetic algorithm to improve the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall of the CBIR system. Our experimental results also indicate that using CBIR features in the compressed domain together with the genetic algorithm improves the results considerably when compared with techniques from the literature.
Steganography is a common method for secretly communicating information during data transfer, and images are an appropriate carrier for protecting the hidden material. Several systems, such as color image steganography and grayscale image steganography, operate on color and store data in different ways; color images can carry very large amounts of secret data because they use three main color channels. Various color models exist, such as HSV (hue, saturation, value), RGB (red, green, blue), YCbCr (luminance and chrominance), YUV, YIQ, etc. This paper uses an unusual model to hide data: an adaptive procedure that increases security when hiding a secret binary image in an RGB color image, with the steganography implemented in the YCbCr color space. Exclusive-OR (XOR) operations are performed between the binary image and the RGB color image in the YCbCr space. The byte stored in the 8-bit LSB is not the actual secret byte; rather, it is obtained by converting to the other color space and applying the XOR operation. The technique is applied to different groups of images, and the adaptive approach yields good results in terms of peak signal to noise ratio (PSNR) and mean square error (MSE). When compared with our previous work and other existing techniques, it is shown to be the best in both error and message capacity. The technique is easy to model, simple to use, and provides perfect security against unauthorized access.
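The core XOR-and-embed idea can be sketched as follows; this is a simplified single-channel illustration using OpenCV's YCrCb conversion, not the paper's adaptive procedure, and the channel chosen for embedding is an assumption:

```python
import numpy as np
import cv2

def embed_bits(cover_bgr: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Hide a flat array of 0/1 bits in the LSB of the Cr channel after
    XOR-ing each bit with the corresponding luminance LSB (simplified)."""
    ycrcb = cv2.cvtColor(cover_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    flat_cr = cr.flatten()
    n = len(secret_bits)
    mixed = secret_bits.astype(np.uint8) ^ (y.flatten()[:n] & 1)   # XOR step
    flat_cr[:n] = (flat_cr[:n] & 0xFE) | mixed                     # write into the LSB
    # The receiver would extract and un-XOR the bits in this same YCrCb space.
    return cv2.merge([y, flat_cr.reshape(cr.shape), cb])

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
secret = np.random.randint(0, 2, 100, dtype=np.uint8)
stego_ycrcb = embed_bits(cover, secret)
```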
Learning Based Single Frame Image Super-resolution Using Fast Discrete Curvel...CSCJournals
High-resolution (HR) images play a vital role in all imaging applications as they offer more details. The images captured by the camera system are of degraded quality due to the imaging system and are low-resolution (LR) images. Image super-resolution (SR) is a process, where HR image is obtained from combining one or multiple LR images of same scene. In this paper, learning based single frame image super-resolution technique is proposed by using Fast Discrete Curvelet Transform (FDCT) coefficients. FDCT is an extension to Cartesian wavelets having anisotropic scaling with many directions and positions, which forms tight wedges. Such wedges allow FDCT to capture the smooth curves and fine edges at multiresolution level. The finer scale curvelet coefficients of LR image are learnt locally from a set of high-resolution training images. The super-resolved image is reconstructed by inverse Fast Discrete Curvelet Transform (IFDCT). This technique represents fine edges of reconstructed HR image by extrapolating the FDCT coefficients from the high-resolution training images. Experimentation based results show appropriate improvements in MSE and PSNR.
This document summarizes a research paper on techniques for binarizing degraded document images. It discusses how degraded documents often have mixed foreground and background pixels that need to be separated. The proposed method uses contrast adjustment, grey scale edge detection, thresholding and post-processing to binarize degraded images. It first inverts the image contrast, then uses grey scale detection to find text stroke edges. Pixels are classified and thresholding is used to create a binary image. Post-processing removes background pixels to output a clean image with only text strokes. The method is tested on degraded novel and book images and produces separated, readable text from the backgrounds.
This document discusses the application of morphological image processing in forensics for fingerprint enhancement. It provides background on morphological operations like dilation, erosion, opening and closing. It explains how these operations can be used to enhance degraded fingerprints by thickening ridges, joining broken ridges, and separating overlapped ridges. The morphological image processing concepts are implemented in Java to experimentally enhance fingerprint images and reduce noise.
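The paper's implementation is in Java, but the four operations it relies on can be shown compactly; the Python/OpenCV sketch below uses an illustrative 3x3 structuring element on a synthetic binary image, neither of which is specified in the document:

```python
import cv2
import numpy as np

# Synthetic stand-in for a binarized fingerprint image (255 = ridge pixels).
binary = (np.random.rand(128, 128) > 0.5).astype(np.uint8) * 255

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
dilated = cv2.dilate(binary, kernel)                         # thickens ridges
eroded = cv2.erode(binary, kernel)                           # thins ridges, removes specks
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # erosion then dilation: removes small noise
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilation then erosion: joins broken ridges
```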
Review on Image Enhancement in Spatial Domainidescitation
With the proliferation of electronic imaging devices in mobiles, computer vision, the medical field and the space field, image enhancement has become a quite interesting and important area of research. These imaging devices are viewed under a diverse range of viewing conditions, with a large loss of contrast under bright outdoor conditions; thus viewing-condition parameters such as surround effects, correlated color temperature and ambient lighting have become significant. The principal objective of image enhancement is therefore to adjust the quality of an image for better human visual perception. The appropriate choice of enhancement technique is greatly influenced by the imaging modality, the task at hand and the viewing conditions. Image enhancement techniques are broadly classified into two categories: spatial domain image enhancement and frequency domain image enhancement. This survey report gives an overview of the different methodologies that have been used for enhancement in the spatial domain category. It is noted that more research remains to be done in this field.
Engineering Research Publication, International Journal of Engineering & Technical Research, ISSN: 2321-0869 (O), 2454-4698 (P), www.erpublication.org
International Journal of Engineering Research and Development (IJERD)IJERD Editor
A spatial image compression algorithm based on run length encodingjournalBEEI
Image compression is vital for many areas such as communication and the storage of data, which is growing rapidly nowadays. In this paper, a spatial lossy compression algorithm for gray scale images is presented. It exploits the inter-pixel and psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels whose values fluctuate within some small threshold. The path is computed by looking at the 4-neighbors of a pixel and choosing the best one based on two conditions: first, the selected pixel must not already be included in another path, and second, the difference between the first pixel in the path and the selected pixel must be within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied to the paths to harvest the inter-pixel redundancy. After applying the proposed algorithm to several test images, promising quality versus compression ratio results were achieved.
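A simplified reading of the path-growing step is sketched below; the greedy neighbor choice and the way paths would later be run-length encoded are assumptions, not the authors' exact rules:

```python
import numpy as np

def grow_path(img, start, used, threshold):
    """Greedily grow a path of 4-connected pixels whose values stay within
    `threshold` of the first pixel (a simplified reading of the algorithm)."""
    h, w = img.shape
    path = [start]
    used[start] = True
    base = int(img[start])
    r, c = start
    while True:
        candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        candidates = [(i, j) for i, j in candidates
                      if 0 <= i < h and 0 <= j < w and not used[i, j]
                      and abs(int(img[i, j]) - base) <= threshold]
        if not candidates:
            return path
        # Pick the neighbour closest in value to the path's first pixel.
        r, c = min(candidates, key=lambda p: abs(int(img[p]) - base))
        used[r, c] = True
        path.append((r, c))

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
used = np.zeros(img.shape, dtype=bool)
paths = [grow_path(img, (i, j), used, threshold=8)
         for i in range(8) for j in range(8) if not used[i, j]]
# Each path can then be stored as (value of the first pixel, run of locations),
# which is where the run-length encoding step harvests inter-pixel redundancy.
print(len(paths))
```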
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
This document presents a novel approach for jointly optimizing spatial prediction and transform coding in video compression. It aims to improve performance and reduce complexity compared to existing techniques. The proposed method uses singular value decomposition (SVD) to compress images. SVD decomposes an image matrix into three matrices, allowing the image to be approximated using only a few singular values. This achieves compression by removing redundant information. The document outlines the SVD approach for image compression and measures compression performance using compression ratio and mean squared error between the original and compressed images. It then discusses trends in image and video coding, including combining natural and synthetic content. Finally, it provides a block diagram of the proposed system and compares its compression performance to existing discrete cosine transform-
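The rank-k SVD approximation referred to above is straightforward to write down; the following generic sketch (not the paper's code) also measures the compression ratio and mean squared error mentioned in the summary:

```python
import numpy as np

def svd_compress(img: np.ndarray, k: int):
    """Approximate an image by keeping only its k largest singular values."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
    m, n = img.shape
    stored = k * (m + n + 1)            # k columns of U, k rows of Vt, k singular values
    ratio = (m * n) / stored
    mse = np.mean((img.astype(np.float64) - approx) ** 2)
    return np.clip(approx, 0, 255).astype(np.uint8), ratio, mse

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
recon, cr, mse = svd_compress(img, k=32)
print(f"compression ratio {cr:.2f}, MSE {mse:.1f}")
```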
This document discusses various image compression techniques including SPIHT, SPIHT 3D, and LVL-MMC. It aims to compress color images using these methods in different color spaces to achieve high compression ratios. The document provides background on grayscale images, wavelet transforms, Haar wavelets, and the compression algorithms. It then presents results comparing the techniques based on metrics like PSNR, BPP, CR, and MSE. It concludes that LVL-MMC achieved the best compression ratio compared to SPIHT and SPIHT 3D and future work could extend the methods to multimedia files.
A hybrid predictive technique for lossless image compressionjournalBEEI
Compression of images is of great interest in applications where efficiency with respect to data storage or transmission bandwidth is sought. The rapid growth of social media and digital networks has given rise to huge amounts of image data being accessed and exchanged daily. However, the larger the image, the longer it takes to transmit and archive; high quality images require a large amount of transmission bandwidth and storage space. Suitable image compression can help reduce image size and improve transmission speed. Lossless image compression is especially crucial in fields such as remote sensing, healthcare networks, security and military applications, where image quality must be maintained to avoid errors during analysis or diagnosis. In this paper, a hybrid predictive lossless image compression algorithm is proposed to address these issues. The algorithm combines predictive Differential Pulse Code Modulation (DPCM) with the Integer Wavelet Transform (IWT). Entropy and compression ratio calculations are used to analyze the performance of the designed coding. The analysis shows that the best hybrid predictive algorithm is the sequence DPCM-IWT-Huffman, which reduces bit sizes by 36%, 48%, 34% and 13% for the tested Lena, Cameraman, Pepper and Baboon images, respectively.
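The DPCM stage can be illustrated with a simple previous-pixel predictor whose residuals would then feed the IWT and entropy coding stages; this is a generic sketch, not the paper's predictor:

```python
import numpy as np

def dpcm_encode(img: np.ndarray) -> np.ndarray:
    """Row-wise DPCM: each pixel is predicted by its left neighbour,
    and only the prediction error is kept."""
    img = img.astype(np.int16)
    residual = img.copy()
    residual[:, 1:] = img[:, 1:] - img[:, :-1]
    return residual

def dpcm_decode(residual: np.ndarray) -> np.ndarray:
    """Invert the predictor exactly, so the scheme stays lossless."""
    return np.cumsum(residual, axis=1).astype(np.uint8)

img = np.random.randint(0, 256, (4, 8), dtype=np.uint8)
res = dpcm_encode(img)
assert np.array_equal(dpcm_decode(res), img)   # perfect reconstruction
```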
Hybrid Algorithm for Enhancing and Increasing Image Compression Based on Imag...khalil IBRAHIM
Data compression and decompression play a very important role: they are necessary to minimize storage requirements and to increase data transmission rates over the communication channel. Evaluating and analyzing image quality across different image compression techniques using a hybrid algorithm is an important new approach. The paper applies the hybrid technique to image sets to enhance and increase image compression, offering advantages such as minimizing the graphics file size while keeping image quality high. In this concept, the hybrid image compression algorithm (HCIA) is used as one integrated compression system; HCIA is a new technique and has proven itself on different types of image files. Compression effectiveness is affected by the sensitivity of the image quality, and the image compression process involves identifying and removing redundant pixels and unnecessary elements of the source image. The proposed algorithm is a new approach to computing and presenting high image quality while maximizing compression [1]. The approach reduces the space consumption and computation required for a given compression rate without degrading image quality, and the experimental results show that improvement and accuracy can be achieved using the hybrid compression algorithm. A hybrid algorithm has been implemented to compress and decompress the given images using hybrid techniques in a Java software package.
Thesis on Image compression by Manish MystManish Myst
The document discusses using neural networks for image compression. It describes how previous neural network methods divided images into blocks and achieved limited compression. The proposed method applies edge detection, thresholding, and thinning to images first to reduce their size. It then uses a single-hidden layer feedforward neural network with an adaptive number of hidden neurons based on the image's distinct gray levels. The network is trained to compress the preprocessed image block and reconstruct the original image at the receiving end. This adaptive approach aims to achieve higher compression ratios than previous neural network methods.
Comprehensive Study of the Work Done In Image Processing and Compression Tech...IRJET Journal
This document summarizes research on image processing techniques to address redundancy. It discusses how overlapping pixels when merging images can cause redundancy, taking up extra space. It reviews papers analyzing redundancy problems from compression techniques. Lossy techniques like discrete cosine transform and lossless techniques like run length encoding and Huffman encoding are described for compressing images to reduce redundancy. The document also discusses using compression to eliminate irrelevant information from images.
Enhanced Image Compression Using WaveletsIJRES Journal
Data compression which can be lossy or lossless is required to decrease the storage requirement and better data transfer rate. One of the best image compression techniques is using wavelet transform. It is comparatively new and has many advantages over others. Wavelet transform uses a large variety of wavelets for decomposition of images. The state of the art coding techniques like HAAR, SPIHT (set partitioning in hierarchical trees) and use the wavelet transform as basic and common step for their own further technical advantages. The wavelet transform results therefore have the importance which is dependent on the type of wavelet used .In our thesis we have used different wavelets to perform the transform of a test image and the results have been discussed and analyzed. Haar, Sphit wavelets have been applied to an image and results have been compared in the form of qualitative and quantitative analysis in terms of PSNR values and compression ratios. Elapsed times for compression of image for different wavelets have also been computed to get the fast image compression method. The analysis has been carried out in terms of PSNR (peak signal to noise ratio) obtained and time taken for decomposition and reconstruction.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
Novel hybrid framework for image compression for supportive hardware design o...IJECEIAES
Performing the image compression over the resource constrained hardware is quite a challenging task. Although, there has been various approaches being carried out towards image compression considering the hardware aspect of it, but still there are problems associated with the memory acceleration associated with the entire operation that downgrade the performance of the hardware device. Therefore, the proposed approach presents a cost effective image compression mechanism which offers lossless compression using a unique combination of the non-linear filtering, segmentation, contour detection, followed by the optimization. The compression mechanism adapts analytical approach for significant image compression. The execution of the compression mechanism yields faster response time, reduced mean square error, improved signal quality and significant compression ratio performance.
Iaetsd performance analysis of discrete cosineIaetsd Iaetsd
The document discusses image compression using the discrete cosine transform (DCT). It provides background on image compression and outlines the DCT technique. The DCT transforms an image into elementary frequency components, removing spatial redundancy. The document analyzes the performance of compressing different images using DCT in Matlab by measuring metrics like PSNR. Compression using DCT with different window sizes achieved significant PSNR values.
A REVIEW OF IMAGE COMPRESSION TECHNIQUESArlene Smith
This document reviews various image compression techniques used for medical images. It begins by discussing the need for compressing large volumes of medical images generated for storage and transmission purposes. It then summarizes several key lossless and lossy compression techniques that have been proposed in other research papers, including techniques using wavelet transforms, DCT, and Huffman encoding. The techniques are evaluated based on their advantages like preserving image quality, and limitations like being slow or expensive. Results showed compression ratios from 2.5% to over 40% were achieved without significantly degrading image quality. Overall the document provides an overview of different medical image compression methods and their performance.
ON THE IMAGE QUALITY AND ENCODING TIMES OF LSB, MSB AND COMBINED LSB-MSBijcsit
The Least Significant Bit (LSB) algorithm and the Most Significant Bit (MSB) algorithm are stenography algorithms with each one having its demerits. This work therefore proposed a Hybrid approach and compared its efficiency with LSB and MSB algorithms. The Least Significant Bit (LSB) and Most
Significant Bit (MSB) techniques were combined in the proposed algorithm. Two bits (the least significant bit and the most significant bit) of the cover images were replaced with a secret message. Comparisons were made based on Mean-Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and the encoding time between the proposed algorithm, LSB and MSB after embedding in digital images. The combined
technique produced a stego-image with minimal distortion in image quality than MSB technique independent of the nature of data that was hidden. However, LSB algorithm produced the best stego-image quality. Large cover images however made the combined algorithm’s quality better improved. The combined algorithm had lesser time of image and text encoding. Therefore, a trade-off exists between the encoding time and the quality of stego-image as demonstrated in this work.
This document compares the performance of three lossless image compression techniques: Run Length Encoding (RLE), Delta encoding, and Huffman encoding. It tests these algorithms on binary, grayscale, and RGB images to evaluate compression ratio, storage savings percentage, and compression time. The results found that Delta encoding achieved the highest compression ratio and storage savings, while Huffman encoding had the fastest compression time. In general, the document evaluates and compares the performance of different lossless image compression algorithms.
This document summarizes various image compression techniques. It discusses lossless compression techniques like run length encoding, entropy encoding, and area coding that allow perfect reconstruction of images. It also discusses lossy compression techniques like chroma subsampling, transform coding, and fractal compression that allow reconstruction of images with some loss of quality in exchange for higher compression ratios. These lossy techniques are suitable for natural images like photographs. The document provides examples and explanations of how several common compression techniques work.
This document discusses digital image processing and image compression. It covers 5 units: digital image fundamentals, image transforms, image enhancement, image filtering and restoration, and image compression. Image compression aims to reduce the size of image data and is important for applications like facsimile transmission and CD-ROM storage. There are two types of compression - lossless, where the original and reconstructed data are identical, and lossy, which allows some loss for higher compression ratios. Factors to consider for compression method selection include whether lossless or lossy is needed, coding efficiency, complexity tradeoffs, and the application.
A VIDEO COMPRESSION TECHNIQUE UTILIZING SPATIO-TEMPORAL LOWER COEFFICIENTSIAEME Publication
With the advancement of communication in recent trends, video compression plays an important role in the transmission of information on social networking and for storage with limited memory capacity. Also the inadequate bandwidth for transmission and lower quality make video compression a serious phenomenon to consider in the field of communication. There is a need to improve the video compression process which can encode the video data with low computational complexity with better quality along with maintaining speed. In this work, a new technique is developed based on the block processing utilizing the lower coefficients between frames.
Symbols Frequency based Image Coding for Compression
Thafseela Koya Poolakkachalil
PhD Scholar
National Institute of Technology
Durgapur, India
thafseelariyas@hotmail.com
Saravanan Chandran
National Institute of Technology
Durgapur, India
dr.cs1973@gmail.com
Vijayalakshmi K.
Caledonian College of
Engineering, Oman
vijayalakshmi@caledonian.edu.om
Abstract— The main aim of image compression is to represent
the image with minimum number of bits and thus reduce the
size of the image. This paper presents a Symbols Frequency
based Image Coding (SFIC) technique for image compression.
This method utilizes the frequency of occurrence of pixels in an
image. A frequency factor, y is used to merge y pixel values that
are in the same range. In this approach, the pixel values of the
image that are within the frequency factor, y range are clubbed
to the least pixel value in the set. As a result, there is omission of
larger pixel values and hence the total size of the image reduces
and thus results in higher compression ratio. It is noticed that
the selection of the frequency factor, y has a great influence on
the performance of the proposed scheme. However, higher
PSNR values are obtained since the omitted pixels are mapped
to pixels in the similar range. The proposed approach is
analyzed with quantization and without quantization. The
results are analyzed. This proposed new compression model is
compared with Quadtree-segmented AMBTC with Bit Map
Omission. From the experimental analysis it is observed that the
proposed SFIC image compression scheme with both lossless
and lossy techniques outperforms AMBTC-QTBO. Hence, the
proposed new compression model is a better choice for lossless
and lossy compression applications.
Keywords- image compression, lossy compression, lossless
compression, compression ratio, PSNR, MSE, probability.
I. INTRODUCTION
It has been observed that there is an escalation in the use of digital images for multimedia applications in recent years. Over the past few years, Internet based applications such as WhatsApp, Facebook, Twitter, several other social apps, and websites have been influencing the usage of images and videos enormously. Initially, the focus of the users of these applications was on chats and text messages. However, there has been a recent shift in the mode of usage of these applications from chats to sharing information in the form of digital images and videos. An uncompressed image consumes a lot of storage and Internet bandwidth. Hence, compression of digital images reduces the size of the image, so that it occupies less storage and consumes less bandwidth. Digital image compression has been the subject of research for several decades. Recently, the focus has been on color image compression due to the immense amount of digital transmission of images through various Internet based applications [1]-[2].
The idea of image compression is to represent the image
with the smallest number of bits while maintaining the
essential information in the image. The compression is
achieved by exploiting the spatial and temporal redundancies.
Advances in wavelet transform and quantization have produced algorithms that outperform image compression standards. The three main factors that set limits for an image compression technique are image complexity, desired quality, and computational cost. The visual efficiency of an image compression technique depends directly on the amount of visually significant information it retains [3]-[4].
Image compression is classified as lossless image compression and lossy image compression based on whether the reproduced image pixel values are the same or different, respectively. Several research works have focused on lossy image compression for Internet based applications [5]. However, medical image compression schemes prefer the lossless technique, where each pixel value is vital. Other applications of lossless image compression techniques include professional photography, computer vision analysis on recordings, automotive applications, input for post processing in digital cameras, and scientific and artistic images [6]-[7]. Algorithms used for lossless image compression include
lossless JPEG [8], lossless Joint Photographic Experts Group
(JPEG-LS) [9], LOCO-I [10], CALIC [11], JPEG2000
(lossless mode) [12] and JPEG XR [13]. However, these
algorithms provide a small compression ratio compared with
lossy compression as only information redundancy is removed
while perceptual redundancy remains intact [14]- [15].
Lossy schemes provide much higher compression ratios
than lossless schemes. However, the high compression ratio obtained by the lossy compression method comes at the cost of image quality. With a lossy compression technique, the reconstructed image is not identical to the original image, but reasonably close to it [16].
Color image processing is gaining importance due to the significant use of digital images over the Internet [17]. Since the number of bits required to represent a color is three to four times that required to represent grey levels, data compression plays a central role in the storage and transmission of color images [18]. It is an important technique for reducing communication bandwidth consumption. It is highly useful in congested networks like the Internet or in wireless multimedia transmission where bandwidth is low. There
are various techniques for the compression of color images.
In color image compression, the color components are
separated by color transform. Each of the transformed
components is independently compressed [19]. In this
research article, compression ratio of the proposed new
compression model for lossless and lossy compression
techniques is analyzed.
II. RELATED WORKS
Jong et al. proposed a novel coding scheme for block
truncation based on vector quantizer for color image
compression in LCD overdrive. Experimental results showed
that the proposed scheme achieved higher compression ratio
and better visual quality when compared with conventional
methods. This method is suitable for the hardware
implementation in LCD overdrive due to the constant output
bit-rate and the low computational complexity [20].
Benierbah and Khamadja proposed a novel approach that
reduced the inter-band correlation by compensating the
differences between them [21]. The prediction error was
spatially coded by another method. This method had two main
advantages namely: simplicity in implementation and
application over parallel architectures. The comparison of this
technique with various other techniques proved that this
technique is very efficient and can even outperform them. This
technique is applied for lossless, lossy, and scalable coding.
This approach has provided motivation for research on
lossless and lossy compression.
Olivier et al. proposed a coding scheme in two layers. In
the first layer, the image was compressed at low bit rates even
while preserving the overall information and contours. The
second layer encoded local texture in comparison to the initial
partition [22]. This has given the motivation to target for
higher PSNR values while obtaining high compression ratio.
Min-Jen Tsai proposed a compression scheme for color
images based on stack run coding. The highlight of this
scheme was that a small number of symbol set was used to
convert images from the wavelet transform domain to the
compact data structure. The approach provided competitive
PSNR values and high quality images at the same
compression ratio when compared with other techniques [19].
This gave motivation to focus on compression technique
which gives high compression ratio and good PSNR value
with high quality reconstructed image at the same time.
Soo et al. proposed an algorithm based on the Kohonen neural network to train color images for limited color displays. It was observed that the Peak Signal-to-Noise Ratio (PSNR) of the decoded image was high and that a good compression ratio could be obtained [23]. Experimental results showed that this method produced an average compression ratio of 13.09 and an average Signal-to-Noise Ratio (SNR) of 30.69 [24]. This has motivated the goal of achieving a higher compression ratio and SNR.
Chen et al. proposed a color image compression scheme
based on moment-preserving and block truncation coding
where the input image is divided into non-overlapping blocks.
Here, an average compression ratio of 14.00 was achieved [24].
This idea inspired to include block truncation during
quantization process of the proposed method.
Panos and Rabab proposed A High-Quality fixed-Length
Compression Scheme for Color Images where the Discrete
Cosine Transform (DCT) of 8x8 picture blocks of an image
was compressed using fixed-length codewords. Fixed length
encoding scheme was simpler to implement when compared
to variable-length encoding scheme. This scheme was not
susceptible to the error propagation and synchronization
problems that were a part of variable-length coding schemes
[25]. This has shown direction to incorporate DCT based
application of the proposed approach in future work.
Jinlei Zhang et al. proposed a novel coding scheme for the
compression of hyperspectral images. The approach designed
a distributive coding scheme that fulfilled the exclusive
requirements of these images that includes lossless
compression, progressive transmission, and low complexity
onboard processing. Here, the complexity of data
decorrelation was shifted to the decoder side to achieve
lightweight onboard processing after image acquisition. The
experimental results clearly demonstrated that this scheme
achieved high compression ratio for lossless compression of
HS images with a low complexity encoder [26]. This
motivated to focus on research based on low complexity
encoder.
Pascal Peter proposed a new approach where a missing
link between the discrete colorization of Levin et al. [27] and
continuous diffusion-based inpainting in the YCbCr color
space was introduced. With the proposed colorization
technique, it was possible for the high-quality reconstruction
of color data. This motivated to include pixel replacement
technique in this paper based on the frequency of occurrence
of the pixels exploiting the fact that the human eyes are more
sensitive to structural information than color information
[28].
Kim, Han, Tai and Kim proposed salient region detection via high-dimensional color transform and local spatial support, which is a novel approach to automatically detect salient regions in an image. The approach consisted of local and global features which complemented each other in the computation of the saliency map. The first step in the formation of the saliency map was using a linear combination of colors in a high-dimensional color space. This
observation is based on the fact that salient regions often
possess distinctive features when compared to backgrounds in
human perception. Human perception is nonlinear. The
authors have shown the composition of accurate saliency map
by finding the optimal linear combination of color coefficients
in the high-dimensional color space. This is performed by
mapping the low-dimensional red, green, and blue color to a
feature vector in a high-dimensional color space. The second
step was to utilize relative location and color contrast between
superpixels as features and then to resolve the saliency
estimation from a trimap via learning based algorithm. This
step further improved the performance of the saliency
estimation. It is observed that the additional local features and
learning based algorithm complement the global estimation
from the high-dimensional color transform-based algorithm.
The experimental results showed that this approach is
effective in comparison with the previous state-of-art saliency
estimation methods [29]. This motivated to utilize location of
pixels and their pixel values.
Rushi Lan et al. have proposed the quaternionic local ranking binary pattern (QLRBP) for color images. This
method is different from the traditional descriptors where they
are extracted from each color channel separately or from
vector representations. QLRBP works on the quaternionic
representation (QR) of the color image which encodes a color
pixel using quaternion. QLRBP is able to handle all color
channels directly in the quaternionic domain and include their
dimensions simultaneously. QLRBP uses a reference
quaternion to rank QRs of two color pixels, and performs a
local binary coding on the phase of the transformed result to
generate local descriptors of the color image. Experiments
demonstrate that the QLRBP outperforms several state-of-the-art methods [30]. This motivated clubbing the color channels together while reconstructing the image.
Yu-Chen Fan et al. proposed a luminance and color
correction scheme for multiview image compression for a 3-
DTV system. A 3-D discrete cosine transform (3-D DCT)
based on cubic memory was proposed for image compression
according to the characteristics of luminance and
chrominance. The designed chip achieved this goal. This method provided a solution for 3-D multiview compression and storage research [31].
Seyum and Nam proposed the Hierarchical Prediction and
context Adaptive Coding for Lossless Color Image
Compression where the correlation between the pixels of an
RGB image was removed by a color transform that was
reversible. A conventional lossless image coder was used to
compress luminance channel Y. Hierarchical decomposition
and directional prediction was used for analyzing the pixels in
the chrominance channel. In the later stage, arithmetic coding
was applied to the prediction residuals. From the results, it was
observed that the average bit rate reductions over JPEG2000
for Kodak image set, some medical images, and digital camera
images were 7.105%, 13.55%, and 5.52% respectively [7].
This motivated to split the color channels in the beginning.
Jose et al. proposed the Logarithmical Hopping Encoding algorithm, which used the Weber-Fechner law to encode the error between color component predictions and the actual values. Experimental results showed that this algorithm, based on adaptive logarithmical quantization, is suitable for static images [4]. This motivated future work in which the error between the replaced color components and the actual values can be estimated.
Wu-Lin et al. proposed a novel color image compression
technique based on block truncation where quadtree
segmentation technique was employed to partition the color
image. Experimental results showed that the proposed
approach cuts down the bit rates significantly while keeping
good quality for the reconstructed image [1]. This motivated
to direct the research in a way that high quality images are
obtained at minimum bit rates.
Mohamed et al. proposed a lossless image compression
technique where the prediction step was combined with the
integer wavelet transform. Experimental results showed that
this technique provided higher compression ratios than
competing techniques [14].
Wu-Lin Chen et al. proposed the Quadtree-segmented AMBTC with Bit Map Omission (AMBTC-QTBO) [1]. In this approach the color image is decomposed into three grayscale images. Then each grayscale image is partitioned into a set of variable-sized blocks using quadtree segmentation. The algorithm for AMBTC-QTBO is as follows:
1. Decompose the color image into three grayscale images.
2. Partition each grayscale image into 16x16 non-overlapping block sets.
3. Compute the block mean and the two quantization levels a, b using AMBTC [32] for each block.
4. If |a − b| ≤ THQT, encode the 16x16 block using its block mean and go to 8. Otherwise, divide this block into 8x8 equal-size pixel blocks.
5. Calculate the block mean and two-level quantization for the 8x8 blocks.
6. If |a − b| ≤ THBO, encode the 8x8 block using its block mean and go to 8. Otherwise, divide this block into 4x4 equal-size pixel blocks.
7. Calculate the block mean and two-level quantization for the 4x4 blocks. If |a − b| ≤ THBO, encode the 4x4 block using its block mean and go to 8. Otherwise, encode the 4x4 block by AMBTC.
8. Go to 3 if there are any blocks to be processed.
In the above algorithm, the predefined threshold THQT is
used to determine whether the grayscale image blocks are to
be further subdivided. Adding to the above, another pre-
defined threshold THBO is used to determine the block
activity for 4x4 image blocks in the bit map omission
technique. For performance comparison, the proposed method in this paper has been compared with AMBTC-QTBO.
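For illustration only, the following is a minimal Python sketch of the AMBTC-QTBO decision rule summarized above; the threshold values th_qt and th_bo, the helper names, and the assumption of an 8-bit grayscale image whose dimensions are multiples of 16 are ours, not taken from [1].

import numpy as np

def ambtc_levels(block):
    # Two AMBTC quantization levels: mean of pixels below / at-or-above the block mean.
    m = block.mean()
    low, high = block[block < m], block[block >= m]
    a = low.mean() if low.size else m
    b = high.mean() if high.size else m
    return a, b

def encode_block(block, th_qt, th_bo):
    # Smooth blocks keep only their mean (bit map omission); active blocks are
    # split down to 4x4 and finally coded by full AMBTC (two levels plus bit map).
    size = block.shape[0]
    a, b = ambtc_levels(block)
    threshold = th_qt if size == 16 else th_bo   # THQT for 16x16, THBO otherwise
    if abs(a - b) <= threshold:
        return [("mean", size, float(block.mean()))]
    if size > 4:
        h = size // 2
        return [code
                for i in (0, h) for j in (0, h)
                for code in encode_block(block[i:i + h, j:j + h], th_qt, th_bo)]
    bitmap = (block >= block.mean()).astype(np.uint8)
    return [("ambtc", size, float(a), float(b), bitmap)]

def encode_grayscale(img, th_qt=10.0, th_bo=5.0):
    # Tile the grayscale image into 16x16 blocks (step 2) and encode each one.
    codes = []
    for i in range(0, img.shape[0], 16):
        for j in range(0, img.shape[1], 16):
            codes += encode_block(img[i:i + 16, j:j + 16], th_qt, th_bo)
    return codes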
The aim of this research article is to propose a novel
compression scheme for image namely, Symbols Frequency
based Image Coding (SFIC) for Color Image Compression.
The following Section III describes the proposed SFIC
compression technique. The following Section IV
Experiments and Results, analyses the proposed SFIC with
lossless and lossy conditions and Quadtree-segmented
AMBTC with Bit Map Omission [1]. Standard images are
used in this experiment. Conclusions are drawn in section V.
III. IMAGE COMPRESSION
Two essential criteria that are used to measure the
performance of a compression scheme are: Compression
Ratio (CR) and Peak Signal to Noise Ratio (PSNR), which is
the measurement of the quality of the reconstructed image.
a) Compression Ratio
The Compression Ratio (CR) is defined as the ratio
between the size of the original image (n1) and the size of the
compressed image (n2) [33].
CR = n1 / n2        (1)
b) Peak Signal to Noise Ratio
PSNR is an expression for the ratio between the maximum
possible value (power) of a signal and the power of distorting
noise that affects the quality of its representation [34].
The mathematical representation of the PSNR is as follows:

PSNR = 10 · log10( MAX_I^2 / MSE )        (2)

where MAX_I is the maximum possible pixel value of the image (255 for 8-bit images) and the MSE (Mean Squared Error) between the original image I and the reconstructed image K of size m x n is:

MSE = (1 / (m·n)) Σ_(i=0)^(m−1) Σ_(j=0)^(n−1) [ I(i,j) − K(i,j) ]^2        (3)
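As an illustration of these measures, the snippet below computes CR, MSE, and PSNR for a pair of images; the function names are ours and the peak value of 255 assumes 8-bit pixels.

import numpy as np

def compression_ratio(original_size, compressed_size):
    # CR = n1 / n2, the ratio of original to compressed size.
    return original_size / compressed_size

def mse(original, reconstructed):
    # Mean squared error between original image I and reconstructed image K.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, max_value=255.0):
    # PSNR in dB; infinite when the reconstruction is exact (MSE = 0),
    # as reported for several lossless SFIC results later in this paper.
    error = mse(original, reconstructed)
    if error == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / error)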
The three basic steps in the compression of still images are
transformation, quantization and encoding. During the
transformation step, the data set is transformed into another
equivalent data set by the mapper. During the quantization
phase, the quantizer reduces the precision of the mapped data
set in accordance with a pre-established fidelity
criterion. In the quantization process, scaling of data set by
quantization factor takes place whereas in thresholding, all
trivial samples are eliminated. These two processes are
responsible for the introduction of data loss and degradation
of quality. The overall number of bits required to represent the
data set is reduced in the encoding phase [2]- [35].
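A minimal sketch of the quantization and thresholding step described above follows; the uniform scaling and the zeroing of sub-threshold samples are generic illustrations under our own choice of parameters, not the specific quantizer of any cited standard.

import numpy as np

def quantize(coefficients, q_factor, threshold):
    # Thresholding: trivial (small-magnitude) samples are eliminated.
    kept = np.where(np.abs(coefficients) < threshold, 0.0, coefficients)
    # Scaling by the quantization factor reduces the precision of the mapped data.
    return np.round(kept / q_factor)

def dequantize(quantized, q_factor):
    # Approximate inverse used by the decoder; the discarded detail is lost,
    # which is the source of degradation in lossy compression.
    return quantized * q_factor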
In lossless compression, the quantization step is eliminated
since it introduces quantization errors that inhibit the perfect
reconstruction of the image. The quantization step is usually
used to turn the transformation coefficients from their float
format to an integer format [14]. In the case of lossless
compression, most color transforms are not used due to their
non-invertibility with integer arithmetic. As a result, an invertible version of the color transform, the Reversible Color Transform (RCT), is used in JPEG2000 [12].
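For reference, the RCT of JPEG2000 Part 1 maps RGB to a luma/chroma representation using integer arithmetic only, which is what makes it exactly invertible; a sketch for scalar Python integer pixel values follows.

def rct_forward(r, g, b):
    # Reversible Color Transform (JPEG2000 Part 1): integer arithmetic only.
    y = (r + 2 * g + b) // 4
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    # Exact inverse: the floor divisions cancel, so no information is lost.
    g = y - (cb + cr) // 4
    r = cr + g
    b = cb + g
    return r, g, b

# Example: the round trip reproduces the original pixel exactly.
assert rct_inverse(*rct_forward(200, 120, 45)) == (200, 120, 45)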
In the case of lossy compression, since all trivial samples
are eliminated during quantization, higher compression ratios
are obtained when compared to lossless compression, but at
the cost of poor quality of the reconstructed image.
A. Proposed SFIC Encoding Scheme
In this section, we explain the new compression model
SFIC that utilizes the pixel values that are in the same range.
The block diagram of SFIC is displayed in the following figure 1.
First, the image is transformed into a matrix. Further, the R, G, B components (xa(i,j)) of the image are extracted. In SFIC, the pixel values of the R, G, and B components in the integer format are not converted to YCbCr. This step is eliminated because no reverse conversion from float to integer values needs to take place. In the next step, the encoding takes place. There are three stages in the encoder. In the first stage, the pixel values, hereafter named symba, and their frequency of occurrence in each component, hereafter named ncounta, are calculated. symba and ncounta of the pixels in each component are obtained using algorithm 1. In algorithm 1, xa(i,j), the R, G, B components of the image, is used as the input and seq_vectora, which is the column-wise representation of xa(i,j), is found. In the next step, the pixel values are sorted and stored in symba. This is found by extracting the minimum to maximum values of the pixels in seq_vectora. In the next step of algorithm 1, the frequency of occurrence of each pixel in symba is obtained by histogram analysis and is stored as ncounta. The overall frequency based coding scheme takes place in algorithm 2, which is summarized below.
Fig. 1: Block Diagram of proposed SFIC
In algorithm 2, a frequency factor, y, is used to merge y pixel values. For each group of y symba values, which are already in ascending order, the minimum pixel value in the group is made the new pixel value, and the average ncounta of the y pixels is made the new ncounta. These steps in algorithm 2 reduce the number of distinct pixels. In the third stage of the encoder, unique symbols and counts are extracted based on the output obtained from algorithm 2. In the next step, the compression ratio is calculated.

Algorithm 1 Calculation of symbols and count
if xa(i,j) then
    seq_vectora ← [xa(i,j)(:)]'
    symba ← minimum to maximum of seq_vectora
    ncounta ← histogram bin values of symba
else
    exit
end if
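A possible Python reading of algorithm 1 is sketched below, assuming each color component is a 2-D NumPy array; the function name is ours.

import numpy as np

def symbols_and_counts(component):
    # seq_vector: column-wise scan of the component xa(i,j), as in algorithm 1.
    seq_vector = component.flatten(order="F")
    # symb: sorted distinct pixel values (minimum to maximum);
    # ncount: their frequencies of occurrence (histogram bin values).
    symb, ncount = np.unique(seq_vector, return_counts=True)
    return symb, ncount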
Algorithm 2 Calculation of frequency symbols and count
if xa(i,j) then
    Calculate symba and ncounta by Algorithm 1
    zra ← number of elements in symba
    y ← frequency number
    for i ← 0:y:zra
        symba(i+1:i+y) ← minimum of symba(i+1:i+y)
        ncounta(i+1:i+y) ← average of ncounta(i+1:i+y)
    end
else
    exit
end if
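The merging step of algorithm 2 can be read as follows in Python, operating on the symb and ncount arrays produced by the algorithm 1 sketch above; treating a trailing group smaller than y the same way as a full group is our assumption.

import numpy as np

def merge_symbols(symb, ncount, y):
    # Merge each group of y consecutive (already sorted) symbols:
    # the group's values collapse to the group minimum and its counts
    # are replaced by the group average, reducing the distinct pixel values.
    symb = symb.astype(np.float64).copy()
    ncount = ncount.astype(np.float64).copy()
    for i in range(0, len(symb), y):
        group = slice(i, min(i + y, len(symb)))
        symb[group] = symb[group].min()
        ncount[group] = ncount[group].mean()
    return symb, ncount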
The decoding phase consists of two main stages. In the first stage, space is allocated for the thresholded image based on the image matrix of each component. In the next stage, the decoder loops over all rows and columns to get each pixel value. Here, the decoder checks the pixel value at a location. If the pixel value is less than symba, the decoder assigns the corresponding value of symba as the new value based on the value of the pixel; otherwise the pixel value remains unchanged, and thus the new thresholded matrix is formed for each component.
The extracted new thresholded matrix of each component of the decoder is concatenated to retrieve the original image, and then the PSNR is calculated.
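One possible reading of this decoding step, under our assumption that each original symbol is simply mapped to the merged (group-minimum) value it was clubbed to by algorithm 2, is sketched below; the row/column loop mirrors the scan described above, and the inputs are the outputs of the earlier sketches.

import numpy as np

def decode_component(component, symb, merged_symb):
    # symb: sorted distinct values from algorithm 1 for this component;
    # merged_symb: the corresponding values after the merging of algorithm 2.
    lookup = {float(s): float(m) for s, m in zip(symb, merged_symb)}
    thresholded = np.empty_like(component, dtype=np.float64)
    rows, cols = component.shape
    for i in range(rows):               # loop over all rows and columns
        for j in range(cols):
            value = float(component[i, j])
            # Replace the pixel by its merged symbol; leave it unchanged otherwise.
            thresholded[i, j] = lookup.get(value, value)
    return thresholded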
IV. RESULTS
In this section, we discuss the results with standard images. SFIC is tested with different values of the frequency factor, y, and with the set of test color images shown in table IV (Lena, Peppers, Airplane, House of size 512 x 512 and Jelly beans of size 256 x 256). Initially, a wide range of y values was used. However, in this research article, only the y values that showed abrupt changes are mentioned. All the test images are in the standard TIFF format. Here, the compression ratio and the quality of the image in terms of PSNR are calculated as discussed in section III.
The following table I shows the lossless compression ratio
for the standard images and their average using the proposed
SFIC image compression scheme for different frequency
factor, y.
Here, the results are shown for the lossless approach. CR (14.79) for the Lena image is the highest when y = 40, whereas for Peppers, CR (15.13) is highest when y = 110. In the case of Airplane, CR (11.57) is at its peak when y = 100. House has its highest CR (11.83) when y = 20, while Jelly beans has its highest CR (10.24) when y = 50. The analysis confirms that the selection of the frequency factor plays a vital role in compression. The y value has a great influence on the performance of SFIC. Further, the experimental results show that SFIC gives its maximum average compression ratio (10.14) when the frequency factor, y, is 20.
Table I Compression Ratio (lossless) of standard images using proposed SFIC for different frequency factor, y.

Images        y=10   y=20   y=30   y=40   y=50   y=100  y=110
Lena          7.67   10.71  10.89  14.79  7.96   8.63   2.70
Peppers       8.81   9.54   7.65   6.05   6.22   6.57   15.13
Airplane      7.80   10.62  6.55   9.96   10.45  11.57  3.03
House         8.29   11.83  12.24  11.46  8.11   5.46   10.03
Jelly beans   7.24   7.99   6.87   7.24   10.24  3.69   2.51
Average       7.96   10.14  8.84   9.90   8.60   7.18   6.68

Table II PSNR for images using proposed SFIC lossless image compression scheme for different frequency factor, y.

Images        y=10   y=20   y=30   y=40   y=50   y=100  y=110
Lena          25     44     44     44     44     44     44
Peppers       29     ∞      ∞      ∞      ∞      ∞      ∞
Airplane      38     ∞      ∞      ∞      ∞      ∞      ∞
House         29     ∞      ∞      ∞      ∞      ∞      ∞
Jelly beans   37     50     50     50     50     50     50

The above table II shows the PSNR for the standard images using the proposed SFIC image compression scheme (lossless) for different frequency factor, y. When y = 10, the highest PSNR value is 38 for the Airplane image. The least PSNR value is 25, exhibited by Lena when y = 10. For the frequency factors (y) 20, 30, 40, 50, 100, or 110, the PSNR value for Lena is 44 and Jelly beans is 50. This is because the symbols in the symbol table do not differ much at these values of y and hence there is no difference between the reconstructed images for these values of y.

Table III MSE for images using proposed SFIC lossless image compression scheme for different frequency factor, y.

Images        y=10   y=20   y=30   y=40   y=50   y=100  y=110
Lena          186    2.9    2.9    2.9    2.9    2.9    2.9
Peppers       82     0.0    0.0    0.0    0.0    0.0    0.0
Airplane      10     0.0    0.0    0.0    0.0    0.0    0.0
House         90     0.0    0.0    0.0    0.0    0.0    0.0
Jelly beans   14     0.6    0.6    0.6    0.6    0.6    0.6
Average       76     0.70   0.70   0.70   0.70   0.70   0.70
The above table III shows the MSE for the images using the proposed SFIC image compression scheme (lossless) for different frequency factor, y. MSE is inversely related to PSNR (a lower MSE gives a higher PSNR), and this is reflected in table III. When y is 20, 30, 40, 50, 100 and 110 for Peppers, Airplane and House, the MSE is 0.
Table IV Test Images and Reconstructed Images with PSNR (lossless SFIC). (The test and reconstructed image thumbnails are not reproduced here; only the names and PSNR values are listed.)

Name          PSNR
Lena          44
Peppers       Infinity
Airplane      Infinity
House         Infinity
Jelly beans   50

The above table IV shows the test images and reconstructed images with PSNR for lossless SFIC. Here, Lena, Peppers, Airplane, and House are of size 512 x 512 and Jelly beans is of size 256 x 256. The reconstructed images with peak PSNR are displayed. The same test images are also used for lossy SFIC. The PSNR values obtained after application of lossy SFIC are displayed in table VI.

The following table V shows the compression ratio (lossy) for the standard images and their average using the proposed SFIC image compression scheme for different frequency factor, y. These results are for the lossy approach. CR (14.44) for the Lena image is the highest when y = 40, whereas for Peppers, CR (12.43) is highest when y = 20. In the case of Airplane, CR (11.37) is at its peak when y = 100. House has its highest CR (15.04) when y = 30, while Jelly beans has its highest CR (16.97) when y = 40.

Table V Compression Ratio (lossy) of standard images using proposed SFIC for different frequency factor, y.

Images        y=10   y=20   y=30   y=40   y=50   y=100  y=110
Lena          7.57   10.53  10.70  14.44  7.86   8.50   2.69
Peppers       8.70   12.43  9.40   7.10   7.32   7.82   2.49
Airplane      7.72   10.45  6.48   9.82   10.29  11.37  3.02
House         8.49   14.44  15.04  13.88  5.69   5.97   11.83
Jelly beans   8.75   12.07  7.24   16.97  11.08  3.79   2.56
Average       8.25   11.98  9.77   12.44  8.45   7.49   8.82

The analysis confirms that the selection of the frequency factor plays a vital role in compression. The y value has a great influence on the performance of SFIC. Further, the experimental results show that SFIC gives its maximum average compression ratio (12.44) when the frequency factor, y, is 40.

Table VI PSNR (lossy) for images using proposed SFIC image compression scheme for different frequency factor, y.

Images        y=10   y=20   y=30   y=40   y=50   y=100  y=110
Lena          26     42     42     42     42     42     42
Peppers       27     47     47     47     47     47     47
Airplane      35     59     59     59     59     59     59
House         28     44     44     44     44     44     44
Jelly beans   35     44     44     44     44     44     44

The above table VI shows the PSNR for the standard images using the proposed SFIC image compression scheme (lossy) for different frequency factor, y. When y = 10, the highest PSNR value is 35 for the Airplane and Jelly beans images. The least PSNR value is 26, exhibited by Lena when y = 10. For the frequency factors (y) 20, 30, 40, 50, 100, or 110, the PSNR value for Lena is 42, Peppers is 47, Airplane is 59, and House and Jelly beans are 44. This is because the symbols in the symbol table do not differ much at these values of y and hence there is no difference between the reconstructed images for these values of y.
Table VII Comparison of Compression Ratio (CR), PSNR of SFIC and AMBTC-QTBO.

Image      SFIC CR    SFIC CR   AMBTC   SFIC PSNR   SFIC PSNR   AMBTC
           Lossless   Lossy     CR      Lossless    Lossy       PSNR
Lena       10.71      14.44     5.99    44          42          33
Peppers    9.54       7.10      6.02    ∞           47          32
Airplane   10.62      9.82      4.17    ∞           59          32
House      11.83      13.88     5.10    ∞           44          30
Average    10.68      11.31     5.32    ∞           48          32
In the above table VII, the proposed image coding scheme (lossless and lossy techniques) is compared with Quadtree-segmented AMBTC with Bit Map Omission (AMBTC-QTBO), and the results are analyzed. The average CR using SFIC in the lossless scheme is 10.68, whereas with the lossy scheme it is 11.31. This increase in CR is due to the presence of the quantization step in the lossy scheme. In the case of AMBTC-QTBO, the average CR is 5.32. These results indicate that the CR obtained by the proposed lossless and lossy schemes is more than twice the CR obtained by AMBTC-QTBO. The average PSNR value with SFIC in the lossless scheme is infinity, whereas with SFIC in the lossy scheme it is 48. The reduction in the PSNR value in the case of the lossy scheme is justifiable as there is loss of data due to quantization in the reconstructed image. In the case of AMBTC-QTBO, the PSNR is 32. This is far less than the PSNR obtained by SFIC in both the lossless and lossy schemes. From the analysis it is seen that the SFIC image compression scheme with both lossless and lossy techniques outperforms AMBTC-QTBO.
V. CONCLUSION
In this research article, a novel approach for image
compression based on the probability of the frequency of
occurrence of pixels has been proposed. This method is
compared with the Quadtree-segmented AMBTC with Bit
Map Omission. The average CR with SFIC in the lossless scheme is 10.68, whereas with the lossy scheme it is 11.31. This increase in CR is due to the presence of the quantization step in the lossy scheme. SFIC, in both its lossless and lossy schemes, thus provides roughly twice the compression ratio of AMBTC-QTBO, that is, compressed color images of about half the size. SFIC in both lossless and lossy schemes also produced higher PSNR than AMBTC-QTBO, showing better picture quality for the regenerated image. From the analysis it is seen that the SFIC image compression scheme with lossless and lossy techniques outperforms AMBTC-QTBO. Hence SFIC is a better choice for lossless and lossy applications. For example, the SFIC approach with the lossless technique is applicable to medical, scientific, and artistic images. The SFIC approach with the lossy technique is applicable to WhatsApp, Facebook, Twitter and other social media apps and websites.
REFERENCES
[1] W. L. Chen, Y. C. Hu, K. Y. Liu, C. C. Lo and C. H. Wen, "Variable-
Rate Quadtree-segmented Block Truncation Coding for Color Image
Compression," International Journal of Signal Processing, Image
Processing and Pattern Recognition, vol. VII, no. 1, pp. 65-76, 2014.
[2] G. K. Kharate and V. H. Patil, "Color Image Compression Based On
Wavelet Packet Best Tree," IJCSI International Journal of Computer
Science Issues, vol. III, no. 2, pp. 31-35, 2010.
[3] M. J. Nadenau, J. Reichel and M. Kunt, "Wavelet-Based Color Image
Compression: Exploiting the Contrast Sensitivity Function," IEEE
Transactions on Image Processing, vol. XII, no. 1, pp. 58-70, 2003.
[4] J. J. Garcia Aranda, M. G. Casquete, M. C. Cueto, J. N. Salmeron and
F. G. Vidal, "Logarithmical Hopping Encoding: A Low
Computational Complexity Algorithm for Image Compression," IET
Image Processing, vol. IX, no. 8, pp. 643-651, 2014.
[5] "Requirements for an Extension of HEVC for Coding of Screen Content," document N14174, ISO/IEC JTC 1/SC 29/WG 11, San Jose, CA, USA, January 2014.
[6] A. Weinlich, P. Amon, A. Hutter and A. Kaup, "Probability
Distribution Estimation for Autoregressive Pixel-Predictive Image
Coding," IEEE Transactions on Image Processing, vol. XXV, no. 3,
pp. 1382-1395, 2016.
[7] S. Kim and N. I. Cho, "Hierarchical Prediction and context Adaptive
Coding for Lossless Color Image Compression," IEEE Transactions
on Image Processing, vol. XXIII, no. 1, pp. 445-449, 2014.
[8] W. B. Pennebaker and J. L. Mitchell, "JPEG Still Image Data
Compression Standard," Van Nostrand Reinhold, 1993.
[9] "ISO/IEC Standard 14495-1," Information Technology—Lossless and
Near-Lossless Compression of Continuous-Tone Still Images (JPEG-
LS), April 1999.
[10] M. Weinberger, G. Seroussi and G. Sapiro, "The LOCO-I lossless
image compression algorithm: Principles and standardization into
JPEG-LS," IEEE Transactions on Image Processing, vol. IX, no. 8, p.
1309–1324, 2000.
[11] X. Wu and N. Memon, "Context-based, adaptive, lossless image
coding," IEEE Transactions on Communications, vol. XLV, no. 4, p.
437–444, 1997.
[12] "Information Technology—JPEG 2000 Image Coding System—Part
1: Core Coding System," INCITS/ISO/IEC Standard 15444-1, 2000.
[13] "ITU-T and ISO/IEC, JPEG XR Image Coding System—Part 2:
Image Coding Specification," ISO/IEC Standard 29199-2, 2011.
[14] M. M. Fouad and R. M. Dansereau, "Lossless Image Compression
Using A Simplified MED Algorithm with Integer Wavelet
Transform," I.J. Image, Graphics and Signal Processing, vol. 1, pp.
18-23, 2014.
[15] W. J. Weinberger, G. Seroussi and G. Sapiro, "The LOCO-I Lossless
Image Compression Algorithm: Principles and standardization into
JPEG-LS," IEEE Transactions on Image Processing, vol. IX, no. 8,
pp. 1309-1324, 2000.
[16] S. Aggrawal and P. l. Srivastava, "Overview of Image Compression
Techniques," Journal of Computer Programming and Multimedia,
vol. 1, no. 2, 2016.
[17] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson
Education, 2016, p. 27.
[18] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson
India Education Services Pvt.Ltd, 2016, p. 454.
[19] M. T. Tsai, "Very Low Bit Rate Color Image Compression by Using
Stack-Run-End-Coding," IEEE Transactions on Consumer
Electronics, vol. XLVI, no. 2, pp. 368-374, 2000.
[20] J. W. Han, M. C. Hwang, S. G. Kim, T. H. You and S. J. Ko, "Vector
Quantizer based Block Truncation Coding for Color Image
Compression in LCD Overdrive," IEEE Transactions on Consumer
electronics, vol. LIV, no. 4, pp. 1839-1845, 2008.
[21] S. Benierbah and M. Khamadja, "Compression of Colour Images by
Inter-band Compensated Prediction," IEE Proceedings - Vision,
Image and Signal Processing, vol. CLIII, no. 2, pp. 237-243, 2006.
[22] O. Deforges, M. Babel, L. Bedat and J. Ronsin, "Color LAR Codec:
A Color Image representation and Compression Scheme Based on
Local Resolution Adjustment and self-Extracting Region
representation," IEEE Transactions on Circuits and Systems for Video
Technology, vol. XVII, no. 8, pp. 974-987, 2007.
[23] S. C. Pei and Y. S. Lo, "Color Image Compression and Limited
Display Using Self-organization Kohonen Map," IEEE Transactions
on Circuits and Systems for Video Technology, vol. VIII, no. 2, pp.
191-205, 1998.
[24] C. K. Yang, J. C. Lin and W. H. Tsai, "Color Image Compression by
Moment-Preserving and Block Truncation Coding Techniques," IEEE
Transactions On Communications, vol. XLV, no. 12, pp. 1513-1516,
1997.
[25] P. Nasiopoulos and R. K. Ward, "A High-Quality Fixed-Length
Compression Scheme for Color Images," IEEE Transactions on
Communications, vol. XLIII, no. 11, pp. 2672-2677, 1995.
[26] J. Zhang, H. Li and C. W. Chen, "Distributed Lossless Coding
Techniques for Hyperspectral Images," IEEE Journal of Selected
Topics in Signal Processing, vol. IX, no. 6, pp. 977-989, 2015.
[27] A. Levin, D. Lischinski and Y. Weiss, "Colorization using
Optimization," CM Transactions on Graphics, vol. XXIII, 2004.
[28] P. Peter, L. Kaufhold and J. Weickert, "Turning Diffusion-based
Image Colorization into Efficient Color Compression," IEEE
Transactions on Image Processing, vol. XXVI, no. 2, 2017.
[29] J. Kim, D. Han, Y. W. Tai and K. Junmo, "Salient Region Detection
via High-Dimensional Color Transform and Local Spatial Support,"
IEEE Transactions on Image Processing, vol. XXV, no. 1, pp. 9-23,
2016.
[30] R. Lan, Y. Zhou and Y. Y. Tang, "Quaternionic Local Ranking Binary
Pattern: A Local Descriptor of Color Images," IEEE Transactions on
Image Processing, vol. XXV, no. 2, pp. 566-579, February 2016.
[31] Y. C. Fan, J. L. You, J. H. Shen and C. Hung, "Luminance and Color
Correction of Multiview Image compression for 3-DTV System,"
IEEE Transactions on Magnetics, vol. L, no. 7, July 2014.
[32] D. Halverson, N. Griswold and G. Wise, "A Generalized Block
truncation Coding Algorithm for Image Compression," IEEE
Transactions on Acoustics Speech Signal Processing, vol. XXXII, no.
3, pp. 664-668, 1984.
[33] A. M. Raid, W. M. Khedr, M. A. El-dosuky and W. Ahmed, "JPEG
Image Compression Using Discrete Cosine Transform - A Survey,"
International Journal of Computer Science &Engineering, vol. V, no.
2, pp. 39-47, 2014.
[34] [Online]. Available: http://www.ni.com/white-paper/13306/en/.
[Accessed 18 August 2017].
[35] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson
India Education Pvt. Ltd, 2016, pp. 536-537.
[36] U. Bayazit, "Adaptive Spectral Transform for Wavelet-Based Color
Image Compression," IEEE Transactions on Circuits and Systems for
Video Technology, vol. XXI, no. 7, pp. 983-992, 2011.