The document presents a new, efficient color image compression technique that aims to improve the quality of decompressed images while achieving higher compression ratios. It does this by compressing the important edge parts of the image differently from the non-edge background parts: low-quality lossy compression is applied to non-edge parts and high-quality lossy compression to edge parts. The technique uses edge detection, adaptive thresholding based on local variance and mean, and the discrete cosine transform followed by quantization and entropy encoding. Experimental results on various images show that it achieves better compression ratios, lower bit rates, and higher peak signal-to-noise ratios than non-adaptive methods.
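As a rough illustration of the block classification step described above, the following Python sketch labels blocks as edge or non-edge from their local mean and variance; the block size, the variance-versus-mean rule, and the factor `k` are illustrative assumptions, not the paper's exact thresholding:

```python
# Hedged sketch: classify blocks as "edge" or "non-edge" by comparing local
# variance against a threshold derived from the local mean (assumed rule).
import numpy as np

def classify_blocks(img, block=8, k=0.5):
    """Return a boolean map: True where a block is treated as an edge block."""
    h, w = img.shape
    edge_map = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            patch = img[i:i+block, j:j+block].astype(np.float64)
            mean, var = patch.mean(), patch.var()
            # blocks whose variance exceeds a fraction of the local mean
            # would be kept at high quality (hypothetical criterion)
            edge_map[i // block, j // block] = var > k * mean
    return edge_map
```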
A spatial image compression algorithm based on run length encoding (journal BEEI)
Image compression is vital for many areas, such as communication and the storage of data, which is growing rapidly nowadays. In this paper, a spatial lossy compression algorithm for grayscale images is presented. It exploits the inter-pixel and psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels that fluctuate in value within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel and then choosing the best one based on two conditions: first, the selected pixel must not be included in another path; second, the difference between the first pixel in the path and the selected pixel must be within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied to the paths to harvest the inter-pixel redundancy. After applying the proposed algorithm to several test images, promising quality versus compression ratio results were achieved.
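A minimal sketch of the path-growing idea, under simplifying assumptions (fixed 4-neighbor scan order and first-fit selection; the paper's "best" neighbor rule may differ):

```python
# Hedged sketch of path growing for the spatial RLE scheme described above.
import numpy as np

def grow_path(img, visited, start, threshold):
    """Grow a path of connected pixels whose values stay within `threshold`
    of the first pixel; return the list of visited coordinates."""
    path = [start]
    visited[start] = True
    base = int(img[start])
    y, x = start
    while True:
        candidates = [(y-1, x), (y+1, x), (y, x-1), (y, x+1)]  # 4-neighbors
        nxt = None
        for cy, cx in candidates:
            if 0 <= cy < img.shape[0] and 0 <= cx < img.shape[1] \
                    and not visited[cy, cx] \
                    and abs(int(img[cy, cx]) - base) <= threshold:
                nxt = (cy, cx)   # first acceptable neighbor (simplified rule)
                break
        if nxt is None:
            return path
        visited[nxt] = True
        path.append(nxt)
        y, x = nxt
```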
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM... (IJDKP)
Urban surveillance systems generate a huge amount of video and image data and impose high pressure on recording disks, so video research is a key area of big data research. Since videos are composed of images, the degree and efficiency of image compression are of great importance. Although the DCT-based JPEG standard is widely used, it encounters difficult problems; for instance, image encoding deficiencies such as block artifacts frequently have to be removed. In this paper, we propose a new, simple but effective method to quickly reduce the visual block artifacts of DCT-compressed images for urban surveillance systems. The simulation results demonstrate that our proposed method achieves better quality than widely used filters while consuming far fewer CPU resources.
Effect of Block Sizes on the Attributes of Watermarking Digital Images (Dr. Michael Agbaje)
This work examines the effect of block size on the attributes (robustness, capacity, time of watermarking, visibility, and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT). The DCT breaks the image up into various frequency bands and allows watermark data to be embedded easily. The advantage of this transformation is its ability to pack the input image data into a few coefficients. The 8 x 8 block size is commonly used in watermarking; this work investigates the effect of using block sizes below and above 8 x 8 on the watermark's attributes. Robustness and capacity increase as the block size increases (62 to 70 dB, 31.5 to 35.9 bits/pixel). The time for watermarking decreases as the block size increases. The watermark is still visible for block sizes below 8 x 8 but invisible for those above it. Distortion decreases sharply from a high value at a 2 x 2 block size to a minimum at 8 x 8, then gradually increases with block size. The overall observation indicates that the watermarked image gradually loses quality due to fading above an 8 x 8 block size. For easy detection of an image against piracy, the 16 x 16 block size gives the best result because it closely resembles the original image in displayed visual quality despite containing a hidden watermark.
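For concreteness, a hedged sketch of block-wise DCT watermark embedding with a variable block size, so the effect of sizes below and above 8 x 8 can be explored; the chosen mid-band coefficient and strength `alpha` are illustrative, not the paper's settings:

```python
# Hedged sketch: embed one watermark bit per block by perturbing a mid-band
# DCT coefficient; block size is a parameter so sizes can be compared.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, alpha=8.0):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    r = block.shape[0] // 2            # a mid-frequency position (assumed)
    coeffs[r, r] += alpha if bit else -alpha
    return idctn(coeffs, norm="ortho")
```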
Halftoning-based BTC image reconstruction using patch processing with border ... (TELKOMNIKA JOURNAL)
This paper presents a new halftoning-based block truncation coding (HBTC) image reconstruction using a sparse representation framework. HBTC is a simple yet powerful image compression technique that can effectively remove the typical blocking effect and false contours. Two types of HBTC methods are discussed in this paper: ordered dither block truncation coding (ODBTC) and error diffusion block truncation coding (EDBTC). The proposed sparsity-based method suppresses the impulsive noise in ODBTC and EDBTC decoded images with a coupled dictionary containing an HBTC image component dictionary and a clean image component dictionary. Herein, a sparse coefficient is estimated from the HBTC decoded image by means of the HBTC image dictionary. The reconstructed image is subsequently built and aligned from the clean (i.e., non-compressed) image dictionary and the predicted sparse coefficient. To further reduce the blocking effect, each image patch is first classified as a "border" or "non-border" type before applying the sparse representation framework. Adding Laplacian prior knowledge of the HBTC decoded image yields better reconstructed image quality. The experimental results demonstrate the effectiveness of the proposed HBTC image reconstruction; the proposed method also outperforms former schemes in terms of reconstructed image quality.
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD (editorijcres)
AKHILESH KUMAR YADAV, DEENBANDHU SINGH, VIVEK KUMAR
Department of Computer Science and Engineering
Babu Banarasi Das University, Lucknow
akhi2232232@gmail.com, deenbandhusingh85@gmail.com, vivek.kumar0091@gmail.com
ABSTRACT- Digital images can be easily modified using powerful image editing software. Distinguishing innocent manipulations, such as sharpening, from malicious ones, such as removing or adding parts of an image, is the topic of this paper. We focus on detecting a special type of forgery, the copy-move forgery, in which part of the original image is copied and pasted at a desired location in the same image. The proposed method compresses the image using the discrete wavelet transform (DWT), divides it into blocks, computes a feature vector for each block, sorts the vectors lexicographically, and identifies duplicated blocks after sorting. The method is robust against manipulations and attacks such as scaling, rotation, Gaussian noise, smoothing, and JPEG compression.
INDEX TERMS- Copy-Move forgery, Wavelet Transform, Lexicographical Sorting, Region Duplication Detection.
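A sketch of the block-matching stage described in the abstract, simplified to raw pixel features rather than the paper's DWT-domain feature vectors:

```python
# Hedged sketch of copy-move block matching via lexicographic sorting.
import numpy as np

def find_duplicate_blocks(img, block=8):
    h, w = img.shape
    rows = []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            # raw pixels as the feature vector (simplified stand-in)
            rows.append((img[i:i+block, j:j+block].ravel().tolist(), (i, j)))
    rows.sort(key=lambda r: r[0])          # lexicographic sort of features
    pairs = []
    for a, b in zip(rows, rows[1:]):       # identical neighbors after sorting
        if a[0] == b[0]:
            pairs.append((a[1], b[1]))     # candidate duplicated regions
    return pairs
```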
ROI Based Image Compression in Baseline JPEG (IJERA Editor)
To improve the efficiency of the standard JPEG compression algorithm, an adaptive quantization technique based on support for a region of interest (ROI) is introduced. Since this is a lossy compression technique, the less important bits are discarded and are not restored during decompression. Adaptive quantization is carried out by applying two different quantization tables, provided by the user, to the picture. The user can select any part of the image and enter the required quality for compression. If, according to the user, the subject is more important than the background, then more quality is given to the subject than to the background, and vice versa. Adaptive quantization in baseline sequential JPEG is carried out by applying the Forward Discrete Cosine Transform (FDCT) and the two user-provided quantization tables for compression, thereby achieving region-of-interest compression, and the Inverse Discrete Cosine Transform (IDCT) for decompression. This technique makes sure that memory is used efficiently. Moreover, we have specifically designed it for clearly identifying defects in leather samples.
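A hedged sketch of the dual-quantization idea: each 8 x 8 block is quantized with one of two scale factors depending on whether it lies in the user's region of interest (the flat stand-in table and scale factors are assumptions):

```python
# Hedged sketch of ROI-dependent quantization of DCT blocks.
import numpy as np
from scipy.fft import dctn

BASE_Q = np.ones((8, 8)) * 16   # stand-in for the JPEG luminance table

def quantize_block(block, in_roi, q_roi=0.5, q_bg=2.0):
    """Smaller scale factor => finer quantization => higher quality."""
    scale = q_roi if in_roi else q_bg
    coeffs = dctn(block.astype(np.float64) - 128, norm="ortho")
    return np.round(coeffs / (BASE_Q * scale)).astype(np.int32)
```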
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means, the watershed segmentation method, and a Difference In Strength (DIS) map were used to perform image segmentation and edge detection tasks. We obtained an initial segmentation based on the K-means clustering technique. Starting from this, we used two techniques: the first is the watershed technique with new merging procedures based on mean intensity value to segment the image regions and detect their boundaries; the second is an edge strength technique that obtains accurate edge maps of our images without using the watershed method. With this technique we solved the problem of the undesirable over-segmentation results produced by the watershed algorithm when used directly on raw data images; also, the edge maps we obtained have no broken lines across the entire image. In the second study, level set methods are used to implement curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries and to isolate and extract individual components from a medical image. This is done using active contours to detect regions in a given image, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. We first classified our images into different intensity regions based on a Markov Random Field; we then detect regions whose boundaries are not necessarily defined by the gradient by minimizing an energy of the Mumford-Shah functional for segmentation, where, in the level set formulation, the problem becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the gradient of the image, as it does in the classical active contour. The initial level set curve can be anywhere in the image, and interior contours are detected automatically. The final image segmentation is one closed boundary per actual region in the image.
An Algorithm for Improving the Quality of Compacted JPEG Image by Minimizes t... (ijcga)
The Block Transform Coded JPEG, a lossy image compression format, has been used to keep the storage and bandwidth requirements of digital images at practical levels. However, JPEG compression schemes may cause unwanted image artifacts to appear, such as the 'blocky' artifact found in smooth/monotone areas of an image, caused by the coarse quantization of DCT coefficients. A number of image filtering approaches incorporating value-averaging filters have been analyzed in the literature to smooth out the discontinuities that appear across DCT block boundaries. Although some of these approaches are able to decrease the severity of these unwanted artifacts to some extent, others have limitations that cause excessive blurring of high-contrast edges in the image. The image deblocking algorithm presented in this paper aims to filter the blocked boundaries. This is accomplished by smoothing, detecting blocked edges, and then filtering the difference between the pixels containing the blocked edge. The deblocking algorithm presented has been successful in reducing blocky artifacts in an image and therefore increases both the subjective and objective quality of the reconstructed image.
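A minimal sketch of the boundary-filtering idea, assuming 8-pixel blocks: a step across a block boundary is smoothed only when it is small relative to a threshold `t`, so genuine high-contrast edges are preserved:

```python
# Hedged sketch of deblocking across vertical block boundaries.
import numpy as np

def deblock_rows(img, block=8, t=10):
    out = img.astype(np.float64).copy()
    for j in range(block, img.shape[1], block):
        left, right = out[:, j-1], out[:, j]
        diff = right - left
        mask = np.abs(diff) < t            # a blocking step, not a real edge
        out[:, j-1][mask] += diff[mask] / 4
        out[:, j][mask] -= diff[mask] / 4
    return out
```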
Abstract—Data compression and decompression play a very important role: they are necessary to minimize storage requirements and to increase the data transmission rate over the communication channel. Evaluating and analyzing image quality across different image compression techniques by applying a hybrid algorithm is the important new approach here. The paper applies the hybrid technique to sets of images to enhance and increase compression, with advantages such as minimizing the graphics file size while keeping image quality at a high level. In this approach, the hybrid image compression algorithm (HCIA) is used as one integrated compression system; HCIA is a new technique and has proven itself on different types of image files. Compression effectiveness is affected by the sensitivity of the image quality, and the image compression process involves identifying and removing redundant pixels and unnecessary elements of the source image. The proposed algorithm is a new approach to computing and delivering high image quality with maximal compression [1]. This research can achieve savings in space consumption and computation for the compression rate without degrading image quality; the experimental results show that improvement and accuracy can be achieved by using the hybrid compression algorithm. A hybrid algorithm has been implemented to compress and decompress the given images using hybrid techniques in a Java software package.

Index Terms—Lossless Based Image Compression, Redundancy, Compression Technique, Compression Ratio, Compression Time.

Keywords
Data Compression, Hybrid Image Compression Algorithm, Image Processing Techniques.
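Among the figures of merit listed above, compression ratio and PSNR recur throughout these abstracts. A minimal Python sketch of both metrics (NumPy assumed; `peak` of 255 assumes 8-bit images):

```python
# Standard quality/size metrics used across these compression papers.
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```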
FAN search for image copy-move forgery - amalta 2014 (SondosFadl)
The proposed Fan Search (FS) algorithm starts once a duplicated block is detected. Instead of an exhaustive search over all blocks, the blocks near the detected block are examined first, in a spiral order.
Compression is an image processing operation that changes the representation of information in order to reduce storage capacity and transmission time. In this work, we propose a new image compression algorithm based on Haar wavelets, introducing a compression coefficient that controls the compression level. This method reduces the complexity of obtaining the desired level of compression from the original image alone, without using intermediate levels.
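A minimal sketch of one Haar analysis/synthesis step with a compression coefficient `c`, using PyWavelets; the thresholding rule tied to `c` is an assumption, since the abstract does not spell out the exact control law:

```python
# Hedged sketch: single-level Haar transform with coefficient thresholding.
import numpy as np
import pywt

def haar_compress(img, c=0.1):
    ll, (lh, hl, hh) = pywt.dwt2(img.astype(np.float64), "haar")
    # zero small detail coefficients; `c` scales the threshold (assumed rule)
    t = c * max(np.abs(lh).max(), np.abs(hl).max(), np.abs(hh).max())
    details = [np.where(np.abs(d) < t, 0, d) for d in (lh, hl, hh)]
    return pywt.idwt2((ll, tuple(details)), "haar")
```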
Quality Assessment of Gray and Color Images through Image Fusion Technique (IJEEE)
Image fusion is an emerging trend in digital image processing for enhancing images. In image fusion, two or more images are fused (combined) to obtain an enhanced image. In the present work, image fusion technology has been used to enhance a given input image by combining two images that contain complementary information.
Evaluation of graphic effects embedded image compression (IJECEIAES)
A fundamental factor in digital image compression is the conversion process. The intention of this process is to understand the shape of an image and to convert the digital image to a grayscale configuration on which the encoding of the compression technique operates. This article investigates compression algorithms for images with artistic effects. A key component of image compression is how to effectively preserve the original quality of images; compression condenses images by reducing their redundant data so that they can be transferred cost-effectively. The common techniques include the discrete cosine transform (DCT), fast Fourier transform (FFT), and shifted FFT (SFFT). Experimental results report and compare compression ratios between the original RGB images and the grayscale images. The algorithm that best improves shape comprehension for images with graphic effects is the SFFT technique.
Wavelet based Image Coding Schemes: A Recent Survey (ijsc)
A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction with high-quality reconstructed images. This paper provides a detailed survey of some popular wavelet coding techniques: Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, such as the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelets (CREW), Stack-Run (SR) coding, and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
A Review on Image Compression using DCT and DWT (IJSRD)
Image compression addresses the problem of reducing the amount of data needed to represent a digital image. Several transformation techniques are used for data compression; the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are the most widely used. The DCT transforms an image from the spatial domain to the frequency domain; it has a high energy compaction property and requires fewer computational resources. The DWT, on the other hand, is a multiresolution transformation. This research paper reviews various approaches that different researchers have used for image compression. The analysis is carried out in terms of the performance parameters peak signal-to-noise ratio, bit error rate, compression ratio, mean square error, and the time taken for decomposition and reconstruction.
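To illustrate the energy compaction property mentioned above, the sketch below keeps only the low-frequency corner of a block's DCT coefficients and inverts; SciPy's `dctn`/`idctn` are used, and the cutoff `k` is an arbitrary illustrative choice:

```python
# Sketch of DCT energy compaction: most block energy sits in few coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def dct_truncate(block, k=4):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]          # low-frequency corner only
    return idctn(kept, norm="ortho")       # approximate reconstruction
```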
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im... (VLSICS Design)
This paper presents the architecture and VHDL design of a Two-Dimensional Discrete Cosine Transform (2D-DCT) with quantization and zigzag arrangement. This architecture is used as the core and path in JPEG image compression hardware. The 2D-DCT calculation is made using the 2D-DCT separability property, such that the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. The architecture for the quantization and zigzag process is also described. The quantization process is done using a division operation. The design is aimed at the Spartan-3E XC3S500E FPGA. The 2D-DCT architecture uses 1891 slices, 51 I/O pins, and 8 multipliers of one Xilinx Spartan-3E XC3S500E FPGA and reaches an operating frequency of 101.35 MHz. One input block with 8 x 8 elements of 8 bits each is processed in 6604 ns, and the pipeline latency is 140 clock cycles.
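A software analogue of the pipeline just described (not the VHDL design itself): the 2D-DCT computed as two 1D-DCT passes around a transpose, mirroring the transpose buffer, plus a standard JPEG zigzag scan:

```python
# Sketch: separable 2D-DCT (two 1D passes + transpose) and zigzag ordering.
import numpy as np
from scipy.fft import dct

def dct2_separable(block):
    tmp = dct(block.astype(np.float64), axis=1, norm="ortho")  # 1st 1D pass
    return dct(tmp.T, axis=1, norm="ortho").T   # transpose, 2nd 1D pass

def zigzag(block8):
    # order cells by anti-diagonal, alternating direction (JPEG zigzag)
    idx = sorted(((i, j) for i in range(8) for j in range(8)),
                 key=lambda p: (p[0] + p[1],
                                p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block8[i, j] for i, j in idx])
```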
An efficient image compression algorithm using dct biorthogonal wavelet trans... (eSAT Journals)
Abstract
Recently, digital imaging applications have increased significantly, which creates a requirement for effective image compression techniques. Image compression removes redundant information from an image; by using it, we can store only the necessary information, which helps to reduce the transmission bandwidth, transmission time, and storage size of an image. This paper proposes a new image compression technique using the DCT-Biorthogonal Wavelet Transform with arithmetic coding to improve the visual quality of an image. It is a simple technique for obtaining better compression results. In this new algorithm, the biorthogonal wavelet transform is applied first, and then a 2D DCT is applied to each block of the low-frequency sub-band. Finally, all values from each transformed block are split, and arithmetic coding is applied for image compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
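A hedged sketch of the two-stage transform order described above, using PyWavelets and SciPy: a biorthogonal DWT, then a block DCT on the low-frequency sub-band (the `bior4.4` wavelet and 8 x 8 blocks are assumptions; arithmetic coding is omitted):

```python
# Sketch of the DWT-then-DCT hybrid transform stage.
import numpy as np
import pywt
from scipy.fft import dctn

def hybrid_transform(img):
    ll, details = pywt.dwt2(img.astype(np.float64), "bior4.4")
    h, w = ll.shape
    out = ll.copy()
    for i in range(0, h - h % 8, 8):       # block DCT on the LL sub-band
        for j in range(0, w - w % 8, 8):
            out[i:i+8, j:j+8] = dctn(ll[i:i+8, j:j+8], norm="ortho")
    return out, details                    # values for the entropy coder
```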
A Comprehensive lossless modified compression in medical application on DICOM... (IOSR Journals)
ABSTRACT: In current times, Digital Imaging and Communication in Medicine (DICOM) is widely used for viewing, distributing, and storing medical images from different modalities. Images can be processed by photographic, optical, and electronic means, but because digital methods are precise, fast, and flexible, image processing using digital computers is the most common approach. Image processing can extract information and modify pictures to improve or change their structure (image editing, composition, image compression, etc.). Image compression is a major component of storage and communication systems; it mitigates the disadvantages of data transmission and image storage and reduces data redundancy. Medical images must be stored for future reference of the patients and their hospital findings; hence, medical images need to undergo compression before storage. Medical image compression is necessary for huge database storage in medical centres and for medical data transfer for diagnostic purposes. Presently, the discrete cosine transform (DCT), lossless run-length encoding, and the discrete wavelet transform (DWT) are the most useful and widely accepted approaches for compression. Based on the discrete wavelet transform, we present a new DICOM-based lossless image compression method. In the proposed method, each DICOM image stored in the data set is compressed vertically, horizontally, and diagonally. We analyze the results of our study across all DICOM images in the data set using two quality measures, PSNR and RMSE, and the performance comparison is made over each image in the data set. This work presents the performance comparison between the input images (without compression) and the results after compression for each image in the data set using the DWT method. Further, the performance of the DWT method with the Haar process is compared with the 2D-DWT method using the PSNR and RMSE quality metrics. The performance of these methods for image compression has been simulated using MATLAB.
Keywords: JPEG, DCT, DWT, SPIHT, DICOM, VQ, Lossless Compression, Wavelet Transform, Image Compression, PSNR, RMSE
Color image compression based on spatial and magnitude signal decomposition (IJECEIAES)
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Given the importance of the most significant value (MSV), which is strongly affected by even simple modifications, an adaptive lossless image compression system is proposed using bit-plane (BP) slicing, delta pulse code modulation (delta PCM), and adaptive quadtree (QT) partitioning, followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on an adaptive, error-bounded coding system and uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with that of the universal JPEG standard, and the results indicate that its performance is comparable to or better than the JPEG standard.
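A minimal sketch of the magnitude decomposition into most and least significant values; the 4-bit/4-bit split is an assumption for illustration:

```python
# Sketch: split each 8-bit pixel into MSV (coded losslessly) and LSV (lossily).
import numpy as np

def split_msv_lsv(band):
    band = band.astype(np.uint8)
    msv = band >> 4          # most significant value (upper nibble)
    lsv = band & 0x0F        # least significant value (lower nibble)
    return msv, lsv

def merge_msv_lsv(msv, lsv):
    return (msv << 4) | lsv  # exact inverse of the split
```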
A Novel Image Compression Approach-Inexact Computing (ijtsrd)
This work proposes a novel approach to digital image processing that relies on faulty computation to address some of the issues with discrete cosine transform (DCT) compression. The proposed system has three processing stages: the first employs an approximated DCT for picture compression to eliminate all compute-intensive floating-point multiplications and to execute DCT processing with integer additions and, in certain cases, logical right/left shifts. The second stage reduces the amount of data that must be processed from the first stage by removing frequencies that cannot be perceived by human senses. Finally, in order to reduce power consumption and delay, the third stage employs inexact circuit-level adders for the DCT computation. A collection of structured pictures is compressed with the suggested three-level method for measurement. Various figures of merit, such as energy consumption, delay, peak signal-to-noise ratio, average difference, and absolute maximum difference, are compared with current compression techniques; an error analysis is also carried out to substantiate the simulation findings. The results indicate significant gains in energy and time reduction while retaining acceptable accuracy levels for image processing applications. Sonam Kumari | Manish Rai, "A Novel Image Compression Approach-Inexact Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-6, October 2022. URL: https://www.ijtsrd.com/papers/ijtsrd52197.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/52197/a-novel-image-compression-approachinexact-computing/sonam-kumari
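The paper's exact approximate DCT is not reproduced here, but the sketch below shows a transform in the same spirit, the well-known "signed DCT" that keeps only the signs (all ±1) of the 8 x 8 DCT basis, so the forward transform needs additions and subtractions only:

```python
# Sketch of a multiplication-free approximate DCT (the signed DCT),
# illustrating how floating-point multiplies can be avoided.
import numpy as np
from scipy.fft import dct

C = dct(np.eye(8), axis=0, norm="ortho")   # exact 8x8 DCT matrix
T = np.sign(C)                             # keep only signs: entries are +/-1

def approx_dct2(block):
    # T @ X @ T.T needs only additions/subtractions in fixed-point hardware
    return T @ block.astype(np.float64) @ T.T
```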
An optimized discrete wavelet transform compression technique for image trans... (IJECEIAES)
Transferring images in a wireless multimedia sensor network (WMSN) is developing rapidly in both research and fields of application. Nevertheless, this area of research faces many problems, such as the low quality of received images after decompression, the limited number of reconstructed images at the base station, and the high energy consumption of the compression and decompression processes. To address these problems, we propose a compression method based on the classic discrete wavelet transform (DWT). Our method applies the wavelet compression technique multiple times to the same image. As a result, we found that the number of received images is higher than with the classic DWT. In addition, the quality of the received images is much higher compared to the standard DWT. Finally, energy consumption is lower when using our technique. Therefore, our proposed compression technique is better adapted to the WMSN environment.
Digital image compression is a modern technology with a wide range of uses in fields such as machine learning, medicine, and research. Many techniques exist in image processing. This paper analyzes compression using the Discrete Cosine Transform (DCT) with special coding methods to produce enhanced results. The DCT is a method used to transform the pixels of an image into elementary frequency components; it converts each pixel value of an image into its corresponding frequency value. The formulas used during compression must be reversible without losing image quality; such formulas exist for both the lossy and lossless compression techniques used in this project. The research tests magnetic resonance images (MRI) using a set of brain images. During program execution, the original image is loaded, algorithms are applied to compress it, and a decompression algorithm is executed on the compressed file to produce an enhanced lossless image.
Amazon products reviews classification based on machine learning, deep learni... (TELKOMNIKA JOURNAL)
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about the product. These reviews are very helpful to store owners and product manufacturers for improving their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF) and bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB); two deep learning (DL) models, a convolutional neural network (CNN) and long short-term memory (LSTM); and a standalone bidirectional encoder representations from transformers (BERT) model are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1, with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
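A minimal scikit-learn sketch of the TF-IDF plus classical-ML stage; the texts and labels are placeholders, not the Amazon datasets:

```python
# Sketch: TF-IDF n-gram features feeding a logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "broke after two days, awful"]
labels = [1, 0]                      # 1 = positive, 0 = negative review

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["really awful quality"]))
```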
Design, simulation, and analysis of microstrip patch antenna for wireless app... (TELKOMNIKA JOURNAL)
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, has been used as the substrate material and serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goals of this study were a wider transmission bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Moreover, the simulation revealed a side-lobe level of -28.8 dB, much better than in earlier works. Thus, the antenna is a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida... (TELKOMNIKA JOURNAL)
In this paper, the snake optimization algorithm (SOA) is used to find the optimal gains of an enhanced controller for controlling the congestion problem in computer networks. M-file and Simulink platforms are adopted to evaluate the response of the active queue management (AQM) system, and a comparison with two classical controllers is made; all tuned controller gains are obtained using the SOA method, and the fitness function chosen to monitor system performance is the integral time absolute error (ITAE). Transient analysis and robustness analysis are used to show the proposed controller's performance. Two robustness tests are applied to the AQM system: one varies the queue size over different periods, and the other changes the number of transmission control protocol (TCP) sessions by ±20% from the original value. The simulation results reflect stable and robust behavior, and the best performance clearly appears in achieving the desired queue size without any noise or transmission problems.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi... (TELKOMNIKA JOURNAL)
Vehicular ad-hoc networks (VANETs) are networks formed along the road by wireless-equipped vehicles. The security of these networks has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure them suffers from weak membership authentication. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public keys of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identities of users to preserve their privacy. The membership authentication mechanism ensures that only valid, authenticated members are allowed to join the network. The performance of the MIBC is evaluated using the intrusion detection ratio (IDR) and computation time (CT) and then validated against the existing IBC. The results show that the MIBC recorded an IDR of 99.3% against 94.3% for the existing identity-based cryptosystem (EIBC) with 140 unregistered vehicles attempting to intrude on the network. The MIBC also shows a lower CT of 1.17 ms against 1.70 ms for the EIBC. The MIBC can thus be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f... (TELKOMNIKA JOURNAL)
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users' perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM). Prior research emphasized that the most essential component in explaining IB adoption behavior is the behavioral intention to use IB. TAM is helpful for figuring out how the elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that can justify the acceptance of IB from the customer's perspective; the conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antenna (TELKOMNIKA JOURNAL)
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure concerned with controlling the smart antenna pattern to meet specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the directions of unwanted signals. The conventional LMS (C-LMS) has some drawbacks, such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
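For reference, a textbook complex LMS weight update; the paper's fuzzy controller would replace the fixed step size `mu` below:

```python
# Standard complex LMS iteration for adaptive array weights.
import numpy as np

def lms_step(w, x, d, mu=0.01):
    """One LMS iteration: x is the array snapshot, d the reference signal."""
    y = np.vdot(w, x)            # array output y = w^H x
    e = d - y                    # estimation error
    return w + mu * np.conj(e) * x, e
```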
Design and implementation of a LoRa-based system for warning of forest fire (TELKOMNIKA JOURNAL)
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low-power, long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node is transmitted to the gateway via LoRa wireless transmission, where it is collected, processed, and uploaded to a cloud database. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system sounds a siren and sends a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio network (TELKOMNIKA JOURNAL)
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to big data, as many Internet of Things (IoT) devices are on the network. Previous research has shown that most of the frequency spectrum is assigned but parts of it go unused; these unused bands are called spectrum holes. Energy detection is one of the most frequently used spectrum sensing methods, since it is easy to use and requires no prior knowledge of the licensed users' signals, but it is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the detection percentage of wavelet-based sensing is 83%, higher than the energy detection performance. This result indicates that wavelet-based sensing has higher detection precision, and the interference towards the primary user can be decreased.
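A minimal sketch of the energy detection baseline that the paper compares against; the noise-derived threshold rule is an assumption:

```python
# Sketch of the baseline energy detector for spectrum sensing.
import numpy as np

def energy_detect(samples, noise_power, factor=2.0):
    energy = np.mean(np.abs(samples) ** 2)   # measured signal energy
    return energy > factor * noise_power     # True => primary user present
```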
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often effects the characteristics of either linear or nonlinear systems. Plus, the stabilization process may be deteriorated thus incurring a catastrophic effect to the system performance. As such, this manuscript addresses the concept of matching condition for the systems that are suffering from miss-match uncertainties and exogeneous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in the proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty and the system with miss-matched uncertainty. Lastly, two-dimensional numerical expressions are provided to practice the proposed proposition. The outcome shows that matching condition has the ability to change the system to a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance ver the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 5, October 2020, pp. 2371-2377
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i5.8632
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
An efficient color image compression technique
Walaa M. Abd-Elhafiez (1), Wajeb Gharibi (2), Mohamed Heshmat (3)

(1) Faculty of Science, Sohag University, Egypt; College of Computer Science & Information Technology, Jazan University, Kingdom of Saudi Arabia
(2) School of Computing and Engineering, UMKC, MO, USA
(3) Faculty of Computer Science and Information System, Sohag University, Egypt
Article history: Received Jan 17, 2018; Revised Apr 20, 2020; Accepted May 1, 2020

ABSTRACT
We present a new image compression method to improve the visual perception of decompressed images and to achieve a higher compression ratio. The method balances compression rate against image quality by compressing the essential parts of the image, its edges, at higher quality. The key subject (edge) regions are of more significance than the background (non-edge) regions. Taking into consideration the value of image components and the effect of smoothness in image compression, the method classifies image components as edge or non-edge: low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. Outcomes show that the suggested method is efficient in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.

Keywords: Compression ratio; Edge detection; Image compression; JPEG; Local thresholds

This is an open access article under the CC BY-SA license.
Corresponding Author:
Walaa M. Abd-Elhafiez, Faculty of Science, Sohag University, 82524, Sohag, Egypt.
Email: w_a_led@yahoo.com
1. INTRODUCTION
Because of advances in various aspects of digital electronics, such as image acquisition, data storage, and displays, many new applications of digital imaging have emerged within the last decade. However, several of those applications are not widespread because of the large storage space they require. Consequently, image compression has grown tremendously over the last decade and various image compression algorithms have been proposed [1, 2]. Image compression reduces the amount of data required to represent a digital image; the reduction process removes redundant data. Multimedia material in uncompressed form needs a considerable amount of storage capacity and transmission bandwidth, which makes transmission slow and time-consuming. Photos transmitted over the World Wide Web are an excellent example of why data compression is important. Compression can be classified as lossless [3, 4] or lossy [5, 6], depending on whether all of the information is retained or some of it is discarded during the compression process [7]. In lossless compression, the recovered data is identical to the original, whereas in lossy compression the restored data is only an approximation of the original. Lossless compression is intended for data such as bank records, where even the change of a single character can be disastrous. Similarly, for medical or satellite pictures, any loss during compression can lead to artifacts in the reconstruction that may give a wrong interpretation. In lossy compression, the amount of loss in the data determines the quality of the reconstruction, and a small loss does not noticeably change the information content.
Lossy compression can also be used for signals such as speech and natural images, and it achieves higher compression than lossless compression.
Yao and Liu [8] presented a color image compression method using the contrast sensitivity characteristics of the human visual system. First, they converted the input image into YCrCb color space and divided it into sub-regions. They applied the DCT and quantization to each block; three quantization matrices were built by combining the contrast sensitivity characteristics of the human visual system. Afterwards, they used Huffman coding. Starosolski [9] proposed simple and efficient color space transformations for lossless image compression. Kekre et al. [10] introduced an image compression system using vector quantization and a hybrid wavelet transform; the Kronecker product of two different transforms can be used to create a hybrid wavelet transform. Ahmed and George [11] presented a color image compression technique based on wavelets, differential pulse code modulation, and quadtree coding. Recently, different image models based on fractional total variation have been proposed [12]; these models operate in the spatial and wavelet domains for images with or without noise. Various related topics, such as image compression, image restoration, and image coding, have been discussed in [13-21].
In our proposed method for color image compression, edge detection and automated derivation of local thresholds are used. The algorithm is composed of three main stages: the image is classified using edge detection and divided into n×n blocks; the discrete cosine transform (DCT) is applied to the partitioned image and the quantized coefficients are ordered using adaptive block scanning; and a variance/mean adaptive threshold is computed to eliminate weak coefficients. The threshold depends either on each color space or on each block within each color space. Experimental results show improved compression ratio, bits per pixel, and peak signal-to-noise ratio for the reconstructed image; the achievable compression ratio depends on the nature of the image. The rest of the paper is organized as follows. Section 2 explains the core process that assigns local thresholds. Section 3 describes the adaptive block scanning method, and section 4 presents the proposed image compression strategy. Results and discussion are given in section 5, and the paper is concluded in section 6.
2. ADAPTIVE THRESHOLD (LOCAL MEANS AND LOCAL VARIANCES)
Thresholding techniques are often applied to segment images into dark objects and bright backgrounds, or the other way round. Thresholding also offers data compression and fast data processing [22, 23]. The easiest approach is global thresholding, where one threshold value, obtained from global information, is chosen for the entire image. However, when the background has non-uniform illumination, a fixed or global threshold value will segment the image poorly. Thus, a local threshold value that changes dynamically over the image is required; this technique is called adaptive thresholding. Below, we introduce an automatic method that calculates adaptive local thresholds for image compression. Simple mean and variance adaptive thresholds are used, based on the local properties of the image parts. Let m(x, y) be the local mean at position (x, y) for a window of size w×w; it is the average of the pixel values g(i, j) within that window and, evaluated with the integral-image technique of [23], can be written as:

$$m(x,y) = \frac{1}{w^{2}}\Big(g\big(x+\tfrac{w}{2},\,y+\tfrac{w}{2}\big) + g\big(x-\tfrac{w}{2},\,y-\tfrac{w}{2}\big) - g\big(x+\tfrac{w}{2},\,y-\tfrac{w}{2}\big) - g\big(x-\tfrac{w}{2},\,y+\tfrac{w}{2}\big)\Big) \tag{1}$$
Also, the local variance v(x, y) [23] is computed as:

$$v(x,y) = \frac{1}{w}\sqrt{\sum_{i=x-w/2}^{x+w/2}\;\sum_{j=y-w/2}^{y+w/2} g^{2}(i,j) \,-\, m^{2}(x,y)} \tag{2}$$
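As an illustration, the following is a minimal NumPy sketch of (1) and (2). It assumes a grayscale image array; the function name `local_mean_variance`, the default window size, and the edge-padding border handling are our own choices, since the paper does not specify them, and (2) is implemented literally as written.

```python
import numpy as np

def local_mean_variance(img, w=15):
    """Local mean (1) and local variance (2) over w x w windows,
    computed with integral images as in [23]. Borders are handled
    by edge padding, which the paper does not specify."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    # Integral images of the pixel values and of their squares, with a
    # leading zero row/column so the four-corner lookups are simple.
    S = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    S2 = np.pad(np.cumsum(np.cumsum(p * p, axis=0), axis=1), ((1, 0), (1, 0)))
    H, W = img.shape
    # Window sum = four-corner combination, exactly the form of (1).
    win = lambda A: A[w:w+H, w:w+W] - A[:H, w:w+W] - A[w:w+H, :W] + A[:H, :W]
    m = win(S) / (w * w)                                  # local mean, (1)
    v = np.sqrt(np.maximum(win(S2) - m * m, 0.0)) / w     # literal reading of (2)
    return m, v
```

For example, `m, v = local_mean_variance(gray, w=15)` returns the per-pixel local statistics that serve as the adaptive thresholds.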
3. ADAPTIVE BLOCK SCANNING
To obtain the best possible compression ratio (CR), the discrete cosine transform (DCT) has been widely employed in image and video coding systems, where the zigzag scan is usually used to organize the DCT coefficients. Scanning is the last level of processing in a transform coder before the final entropy encoding step. Multiple scan patterns are in use (vertical, Hilbert, zigzag, and horizontal) for the various spatial prediction directions of a block. However, due to local prediction errors, the standard zigzag scan is not effective in all cases. We therefore apply the effective scanning method we proposed in [24], which is centered on a sorting method and has proven better for image compression than the zigzag scan; both orderings are illustrated in the sketch below.
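To make the two orderings concrete, the following Python sketch builds the standard zigzag order and a hypothetical magnitude-sorting scan. The exact sorting scheme of [24] may differ; `sorted_scan` is only a plausible reading for demonstration.

```python
import numpy as np

def zigzag_order(n=8):
    """Standard JPEG zigzag ordering of the n x n coefficient positions:
    diagonals of constant i+j, alternating traversal direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def sorted_scan(qblock):
    """Illustrative sorting-based scan: reorder quantized coefficients by
    decreasing magnitude so zero runs collect at the tail, which helps the
    entropy coder. The exact scheme of [24] may differ."""
    flat = qblock.flatten()
    order = np.argsort(-np.abs(flat), kind="stable")
    return flat[order], order   # the permutation must be known to the decoder
```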
4. THE PROPOSED IMAGE COMPRESSION APPROACH
The input image is first classified into edge and non-edge portions using the Canny edge detector [25],
which is a significant and widely used edge detection technique. After that, the image is subdivided into 8×8 blocks and the DCT coefficients are calculated for each block. A quantization process is then applied, which reduces the number of bits by setting the least important high-frequency coefficients to zero; quantization is performed according to a quantization table. The quantized values are rearranged according to the adaptive scan setup described in section 3. Each block is classified as an edge or non-edge block, and one of the cases (a or b) described in step 6 below is applied. In the following two methods, a variable threshold is created that varies either with each color space (the CS method below) or with each block in each color space (the DCS method below). After discarding minor coefficients, the remaining coefficients are compressed by the Huffman encoder. A color image is encoded using one of the proposed methods:
− Method based on color space (CS):
The proposed CS compression algorithm consists of eight main steps, summarized as follows (a minimal code sketch of steps 1-6 follows the list):
Step 1: Apply the Canny operator for edge extraction on each color space image.
Step 2: Compute the adaptive threshold (variance/mean) for each color space, used to eliminate weak coefficients.
Step 3: Divide the image into 8×8 sub-images.
Step 4: Apply the DCT to the partitioned image (64 coefficients are obtained per block: 1 DC coefficient and 63 AC coefficients).
Step 5: Quantize the coefficients.
Step 6: Classify the blocks into edge and non-edge blocks, and then use one of the following cases:
a. For an edge block, set all coefficients below the adaptive variance threshold (or above the adaptive mean threshold) to zero. For a non-edge block, use only the DC coefficient.
b. For both edge and non-edge blocks, set all coefficients below the adaptive variance threshold (or above the adaptive mean threshold) to zero.
Step 7: Order the coefficients using zigzag/adaptive block scanning (as in section 3).
Step 8: Apply Huffman encoding.
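The sketch below illustrates steps 1-6 under case (a) for one channel. It assumes OpenCV for the Canny detector and SciPy for the DCT; the Canny limits, the use of the JPEG luminance quantization table, and the choice of the standard deviation of the quantized coefficients as the "adaptive variance threshold" are all our assumptions, since the paper leaves them unspecified. Steps 7-8 (scanning and Huffman coding) are omitted.

```python
import numpy as np
import cv2                     # assumed available for the Canny detector
from scipy.fft import dctn

# Standard JPEG luminance table, standing in for the unspecified table of step 5.
Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def cs_encode_channel(chan):
    """Sketch of CS steps 1-6, case (a), for one uint8 channel whose
    dimensions are multiples of 8. Edge blocks keep coefficients that
    survive the adaptive threshold; non-edge blocks keep only the DC term."""
    edges = cv2.Canny(chan, 100, 200)          # step 1 (limits are illustrative)
    H, W = chan.shape
    blocks = {}
    for y in range(0, H, 8):                   # step 3: 8x8 partition
        for x in range(0, W, 8):
            blk = chan[y:y+8, x:x+8].astype(np.float64) - 128.0
            blocks[(y, x)] = np.round(dctn(blk, norm="ortho") / Q)  # steps 4-5
    # Step 2: one threshold per color space; the statistic's domain is not
    # fixed by the paper, so we use the std of all quantized coefficients.
    t = np.concatenate([b.ravel() for b in blocks.values()]).std()
    out = np.zeros((H, W))
    for (y, x), q in blocks.items():           # step 6, case (a)
        dc = q[0, 0]
        if edges[y:y+8, x:x+8].any():          # edge block
            q[np.abs(q) < t] = 0               # drop weak coefficients
            q[0, 0] = dc                       # always keep the DC term
        else:                                  # non-edge block: DC only
            q[:] = 0
            q[0, 0] = dc
        out[y:y+8, x:x+8] = q
    return out
```

Under case (b), the non-edge branch would apply the same thresholding as the edge branch instead of keeping only the DC term.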
− Method based on blocks in each color space (DCS):
The DCS algorithm can be summarized in the following steps (a short sketch of its per-block threshold follows the list):
Step 1: Apply the Canny operator for edge extraction on each color space image.
Step 2: Divide the image into 8×8 sub-images.
Step 3: Apply the DCT to the partitioned image (64 coefficients are obtained per block: 1 DC coefficient and 63 AC coefficients).
Step 4: Quantize the coefficients.
Step 5: Compute the adaptive threshold (variance/mean) for each block in each color space, used to eliminate weak coefficients.
Step 6: Classify the blocks into edge and non-edge blocks, and then use one of the following cases:
a. For an edge block, set all coefficients above the adaptive (variance/mean) threshold to zero. For a non-edge block, use only the DC coefficient.
b. For both edge and non-edge blocks, set all coefficients above the adaptive (variance/mean) threshold to zero.
Step 7: Order the coefficients using zigzag/adaptive block scanning (as in section 3).
Step 8: Apply Huffman encoding.
The decoding process is the inverse of the encoding scheme.
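For contrast, the only structural change in the DCS variant is where the threshold comes from: it is recomputed from each 8×8 block (step 5) rather than once per color space. In the CS sketch above, this amounts to replacing the channel-wide statistic with a hypothetical per-block helper such as the one below; the exact statistic is again our assumption.

```python
import numpy as np

def dcs_block_threshold(q):
    """Hypothetical per-block threshold for the DCS method (step 5): the
    same statistic as in the CS sketch, but computed from the current
    block's quantized coefficients alone."""
    return np.std(q)
```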
5. EXPERIMENTAL RESULTS
In this section, experiments are presented to demonstrate the performance of the proposed image coding approach. Different color images in the RGB space with different characteristics are tested, including tree, baboon, and goldhill of size 256×256, and tree2, lena, barbara, and airplane of size 512×512. The various compression methods can be compared using certain performance measures. Compression ratio (CR) is defined as the ratio of the number of bits needed to represent the data before compression to the number of bits needed after compression. Rate is the average number of bits per sample, or per pixel (bpp) in the case of images. Distortion is quantified by the mean square error (MSE), the average of the squared error between the original signal and the reconstruction. The quality of the reconstruction is indicated by the peak signal-to-noise ratio (PSNR), the ratio of the square of the peak signal value to the mean square error, expressed in decibels. These measures can be computed as sketched below.
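For reference, the four measures follow directly from the definitions above; the sketch below assumes 8-bit images and that the compressed size in bits is known from the encoder output.

```python
import numpy as np

def mse(orig, rec):
    """Mean square error between the original image and its reconstruction."""
    d = orig.astype(np.float64) - rec.astype(np.float64)
    return np.mean(d * d)

def psnr(orig, rec, peak=255.0):
    """PSNR in dB: squared peak signal value over the MSE."""
    return 10.0 * np.log10(peak * peak / mse(orig, rec))

def compression_ratio(bits_before, bits_after):
    """CR: bits needed before compression over bits needed after."""
    return bits_before / bits_after

def bits_per_pixel(bits_after, num_pixels):
    """Rate in bpp: average number of compressed bits per pixel."""
    return bits_after / num_pixels
```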
5.1. Non-adaptive method
All of the following cases in this method use Canny edge detection and the zigzag scan:
− Case 1 (CS1): the mean adaptive threshold for each color space is computed. For edge blocks, all coefficients above the adaptive mean threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 2 (CS2): the variance adaptive threshold for each color space is computed. For edge blocks, all coefficients below the adaptive variance threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 3 (CS3): the variance adaptive threshold for each color space is computed. For both edge and non-edge blocks, all coefficients below the adaptive variance threshold are set to zero.
− Case 4 (DCS1): the mean adaptive threshold for each block in each color space is computed. For edge blocks, all coefficients above the adaptive mean threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 5 (DCS2): the mean adaptive threshold for each block in each color space is computed. For both edge and non-edge blocks, all coefficients above the adaptive mean threshold are set to zero.
− Case 6 (DCS3): the variance adaptive threshold for each block in each color space is computed. For edge blocks, all coefficients above the adaptive variance threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 7 (DCS4): the variance adaptive threshold for each block in each color space is computed. For both edge and non-edge blocks, all coefficients above the adaptive variance threshold are set to zero.
The performance of the proposed non-adaptive method on different images is given in Table 1. The results show that using the variance threshold for each block in each color space (DCS3 and DCS4) increases the CR while preserving the image quality.
Table 1. Compression ratio, bit rate, and PSNR values attained by the non-adaptive method

Image     Metric  CS1     CS2     CS3     DCS1    DCS2    DCS3    DCS4
LENA      PSNR    35.572  35.86   36.32   35.565  35.92   33.35   33.375
          CR      19.160  16.63   14.97   18.656  17.44   41.55   41.347
          BPP     1.252   1.442   1.602   1.286   1.375   0.577   0.580
FRUIT     PSNR    34.952  35.21   35.61   34.935  35.23   33.16   33.186
          CR      19.170  15.93   14.33   17.970  16.60   36.09   35.401
          BPP     1.251   1.506   1.674   1.335   1.445   0.664   0.677
BABOON    PSNR    31.275  31.59   31.78   31.239  31.38   29.70   29.714
          CR      9.052   7.047   6.765   8.430   8.169   35.88   35.716
          BPP     2.651   3.405   3.547   2.847   2.937   0.668   0.672
AIRPLANE  PSNR    35.573  36.84   37.08   35.969  36.06   33.24   33.244
          CR      22.248  15.45   14.20   19.053  18.65   44.52   44.410
          BPP     1.078   1.552   1.689   1.259   1.286   0.539   0.540
5.2. Adaptive method
All of the following cases in this method use Canny edge detection and the adaptive scan:
− Case 1 (ACS1): the mean adaptive threshold for each color space is computed. For edge blocks, all coefficients above the adaptive mean threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 2 (ACS2): the local variance for each color space is computed. For edge blocks, all coefficients below the adaptive variance threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 3 (ACS3): the local variance for each color space is computed. For both edge and non-edge blocks, all coefficients below the adaptive variance threshold are set to zero.
− Case 4 (ADCS1): the mean adaptive threshold for each block in each color space is computed. For edge blocks, all coefficients above the adaptive mean threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 5 (ADCS2): the mean adaptive threshold for each block in each color space is computed. For both edge and non-edge blocks, all coefficients above the adaptive mean threshold are set to zero.
− Case 6 (ADCS3): the local variance for each block in each color space is computed. For edge blocks, all coefficients above the adaptive variance threshold are set to zero. For non-edge blocks, only the DC coefficient is used.
− Case 7 (ADCS4): the local variance for each block in each color space is computed. For both edge and non-edge blocks, all coefficients above the adaptive variance threshold are set to zero.
Table 2 shows the performance of the proposed adaptive method.
The reconstructed images are shown in Figure 1. The four curves in Figure 2 and Figure 3 demonstrate that the ACS3 compression performance is higher than the CS3 compression performance. Various comparisons have been performed to prove the effectiveness of the presented methodology over another similar method [26] for color image compression, as shown in Table 3. The results show that the compression ratios of the images are improved. The amount of improvement depends greatly on the nature of the image: for images with few non-edge blocks, such as the Baboon image, the improvement is less significant, whereas for images with many non-edge blocks the improvement is significant.
Table 2. Compression ratio, bit rate, and PSNR values attained by the adaptive method

Image     Metric  ACS1    ACS2    ACS3    ADCS1    ADCS2    ADCS3   ADCS4
Lena      PSNR    35.572  35.864  36.320  35.5639  35.9235  33.359  33.376
          CR      22.198  21.286  19.653  21.1913  19.9568  37.768  37.590
          bpp     1.0812  1.1275  1.2211  1.1325   1.2026   0.6355  0.638
House     PSNR    33.416  34.656  35.047  33.8916  34.1028  31.629  31.636
          CR      17.068  15.404  14.578  15.2232  14.7152  30.115  29.994
          bpp     1.406   1.5580  1.6463  1.5765   1.6310   0.7969  0.800
Tree      PSNR    32.368  32.828  33.071  32.4770  32.6778  30.883  30.892
          CR      13.972  13.102  12.221  12.7070  11.8170  24.972  24.773
          bpp     1.717   1.8318  1.963   1.8887   2.0310   0.9610  0.968
Baboon    PSNR    31.274  31.591  31.780  31.238   31.379   29.705  29.716
          CR      12.026  11.115  10.724  10.508   10.165   30.395  30.274
          bpp     1.995   2.159   2.23    2.283    2.361    0.789   0.792
Airplane  PSNR    35.571  36.854  37.101  35.967   36.065   33.241  33.244
          CR      21.762  19.658  18.545  19.983   19.578   39.167  39.085
          bpp     1.102   1.220   1.294   1.201    1.225    0.612   0.614
Figure 1. The compressed images using the proposed adaptive method
Figure 2. Graphical analysis of bit rate (bpp) vs. PSNR for (a) Lena and (b) Tree images
Figure 3. Graphical analysis of PSNR vs. compression ratio for (a) Lena and (b) Tree images
Table 3. Comparison of PSNR (dB), CR, and bpp of the proposed methods (DCS3 and ADCS3) with the method of [26]

                 Ref. [26]                 Proposed (ADCS3)          Proposed (DCS3)
Image     PSNR     CR      bpp      PSNR     CR       bpp      PSNR     CR       bpp
Airplane  34.3943  37.787  0.6351   33.2417  39.1679  0.6127   33.2421  44.5202  0.539
Baboon    30.4870  20.812  1.1532   29.7059  30.3953  0.7896   29.704   35.883   0.668
Lena      33.9259  36.437  0.6587   33.3598  37.7684  0.6355   33.3587  41.5566  0.577
Tree2     32.0397  27.820  0.8627   31.6790  28.0483  0.8557   -        -        -
House     33.5263  36.485  0.6578   -        -        -        33.3785  44.8697  0.534
6. CONCLUSION
In this work, a new method for color image compression has been proposed that uses adaptive, automated derivation of local thresholds. The suggested approach is based on adaptive threshold computation to remove weak coefficients, and it is decomposed into several cases with different parameters. These cases are based on applying low-quality lossy compression to non-edge areas and high-quality lossy compression to edge parts of the image. The outcomes show the improvement of the adaptive method over the non-adaptive method in quantitative PSNR terms and, particularly, in the visual quality of the reconstructed images. As future work, we will apply the proposed approach to images with cluttered backgrounds, or to cases where the subject has a mono-texture and mono-color while the background has complicated textures and colors.
REFERENCES
[1] Zhe-Ming Lu, Hui Pei, "Hybrid Image Compression Scheme Based on PVQ and DCTVQ," IEICE-Transactions on Information and Systems Archive, vol. E88-D, no. 10, pp. 2422-2426, 2006.
[2] Marta Mrak, Sonja Grgic, and Mislav Grgic, "Picture Quality Measures in Image Compression Systems," IEEE
EUROCON, 2003.
[3] David Salomon, "Data Compression, Complete Reference," Springer Verlag New York, 2007.
[4] Xiwen Owen Zhao, Zhihai Henry He, "Lossless Image Compression Using Super-Spatial Structure Prediction," IEEE Signal Processing Letters, vol. 17, no. 4, pp. 383-386, 2010.
[5] Eddie Batista de Lima Filho, Eduardo A. B. da Silva, Murilo Bresciani de Carvalho, and Frederico Silva Pinagé, "Universal Image Compression Using Multiscale Recurrent Patterns With Adaptive Probability Model," IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 512-527, 2008.
[6] Xin Li, Michael T. Orchard, "Edge-Directed Prediction for Lossless Compression of Natural Images," IEEE
Transactions on Image Processing, vol. 10, no. 6, pp. 813-817, 2001.
[7] K. Sayood, "Introduction to Data Compression," Harcourt India Private Limited, New Delhi, 2nd edition, 2000.
[8] Juncai Yao and Guizhong Liu, "A novel color image compression algorithm using the human visual contrast
sensitivity characteristics," Photonic Sensor, vol. 7, no. 1, pp. 72-81, 2017.
[9] R. Starosolski, "New simple and efficient color space transformations for lossless image compression," Journal of
Visual Communication and Image Representation, vol. 25, no. 5, pp. 1056-1063, 2014.
[10] H. B. Kekre, Prachi Natu, Tanuja Sarode, "Color Image Compression Using Vector Quantization and Hybrid Wavelet
Transform," Procedia Computer Science, vol. 89, pp. 778-784, 2016.
[11] Ali H. Ahmed and Loay E. George, "Color Image Compression Based on Wavelet, Differential Pulse Code Modulation and Quadtree Coding," Research Journal of Applied Science, Engineering and Technology, vol. 14, no. 2, pp. 73-79, 2017.
[12] Y. Zhang, Y. F. Pu, J. R. Hu and J. L. Zhou, "A Class of Fractional-Order Variational Image Inpainting Models,"
Appl. Math. Inf. Sci., vol. 6, no. 2, pp. 299-306, 2012.
[13] W. M. Abd-Elhafiez, Omar Reyad, M. A. Mofaddel, Mohamed Fathy, "Image Encryption Algorithm Methodology
Based on Multimapping Image Pixel," The 4th International Conference on Advanced Machine Learning
Technologies and Applications (AMLTA2019), vol. 921, pp. 645-655, 2019.
[14] Yan Feng, Hua Lu, XiLiang Zeng, "A Fractal Image Compression Method Based on Multi-Wavelet," TELKOMNIKA
Telecommunication Computing Electronics and Control, vol. 13, no. 3, pp. 996-1005, 2015.
[15] Lei Zhu, Jialie Shen, Liang Xie, Zhiyong Cheng, "Unsupervised Visual Hashing with Semantic Assistant for
Content-Based Image Retrieval," IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 2,
pp. 472-486, 2017.
[16] Liang Xie, Jialie Shen, Jungong Han, Lei Zhu, Ling Shao, "Dynamic Multi-View Hashing for Online Image
Retrieval," Twenty-Sixth International Joint Conference on Artificial Intelligence, pp. 3133-3139, 2017.
[17] Mandy Douglas, Karen Bailey, Mark Leeney, Kevin Curran, "Using SVD and DWT Based Steganography to
Enhance the Security of Watermarked Fingerprint Images," TELKOMNIKA Telecommunication, Computing,
Electronics and Control, vol. 15, no. 3, pp. 1368-1379, 2017.
[18] Alexandre Zaghetto, Ricardo L. de Queiroz, "Scanned Document Compression Using Block-Based Hybrid Video
Codec," IEEE Transactions on Image Processing (TIP), vol. 22, no. 6, pp. 2420-2428, 2013.
[19] Walaa M. Abd-Elhafiez, Mohamed Heshmat, "Medical Image Encryption Via Lifting Method," Journal of Intelligent & Fuzzy Systems, vol. 38, no. 3, pp. 2823-2832, 2020.
[20] Qiang Zhang, and Xiaopeng Wei, "An Efficient Approach for DNA Fractal-based Image Encryption," Appl. Math.
Inf. Sci., vol. 5, no. 3, pp. 445-459, 2011.
[21] Wang Xue-guang, Chen Shu-hong, "An Improved Image Segmentation Algorithm Based on Two-Dimensional Otsu
Method," Inf. Sci. Lett., vol. 1, no. 2, pp. 77-83, 2012.
[22] A. Shio, "An Automatic Thresholding Algorithm Based On An Illumination-Independent Contrast Measure," IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, pp. 632-637, 1989.
[23] Faisal Shafait, Daniel Keysers, Thomas M. Breuel, "Efficient Implementation of Local Adaptive Thresholding
Techniques Using Integral Images," SPIE Document Recognition and Retrieval XV, 2008.
[24] W. M. Abd-Elhafiez, U. S. Mohammed, Adem K, "On High Performance Image Compression Technique,"
ScienceAsia, vol. 39, pp. 416-422, 2013.
[25] J. F. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[26] W. M. Abd-Elhafiez, Wajeb Gharibi, "Color Image Compression Algorithm Based on DCT Blocks," International
Journal of Computer Science, vol. 9, no. 4, pp. 323-328, 2012.
BIOGRAPHIES OF AUTHORS
Walaa M. Abd-Elhafiez received her B.Sc. and M.Sc. degrees from South Valley University, Sohag branch, Sohag, Egypt, in 2002, and from Sohag University, Sohag, Egypt, in January 2007, respectively, and her Ph.D. degree from Sohag University, Sohag, Egypt. Her research interests include image segmentation, image enhancement, image recognition, image coding, video coding, and their applications in image processing.
Wajeb Gharibi is a Professor of Computer Science. He received his Ph.D. from the Belarus Academy of Sciences in 1990. His research interests include cybersecurity, machine learning, software engineering, quantum computing, and optimization. He has more than 135 published research papers in reputed journals and conferences.
Mohamed Heshmat received his B.Sc. and M.Sc. degrees from South Valley University, Sohag branch, Sohag, Egypt, in 2002, and his Ph.D. degree from Sohag University, Sohag, Egypt, and Bauhaus-University, Weimar, Germany, in 2010. His research interests include computer vision, 3D data acquisition, object reconstruction, image segmentation, image enhancement, and image recognition.