This paper presents a new halftoning-based block truncation coding (HBTC) image reconstruction method built on a sparse representation framework. HBTC is a simple yet powerful image compression technique that effectively removes the blocking effect and false contours typical of classical BTC. Two HBTC variants are discussed in this paper: ordered dither block truncation coding (ODBTC) and error diffusion block truncation coding (EDBTC). The proposed sparsity-based method suppresses the impulsive noise in ODBTC- and EDBTC-decoded images with a coupled dictionary consisting of an HBTC image dictionary and a clean (non-compressed) image dictionary. A sparse coefficient vector is first estimated from the HBTC-decoded image using the HBTC image dictionary; the reconstructed image is then built from the clean image dictionary and the predicted sparse coefficients. To further reduce the blocking effect, each image patch is first classified as a "border" or "non-border" patch before the sparse representation framework is applied. Adding Laplacian prior knowledge about the HBTC-decoded image yields further gains in reconstructed image quality. Experimental results demonstrate the effectiveness of the proposed HBTC image reconstruction, which also outperforms former schemes in terms of reconstructed image quality.
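To make the coupled-dictionary mechanism above concrete, here is a minimal sketch, assuming random stand-in dictionaries: the sparse code of a decoded patch is estimated against an HBTC-side dictionary with a tiny orthogonal matching pursuit, and the patch is rebuilt from the paired clean-side dictionary. In the paper the two dictionaries would be learned jointly from HBTC-decoded and uncompressed patch pairs; the dimensions and sparsity level below are illustrative only.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D to approximate y."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n, atoms, sparsity = 64, 256, 4           # 8x8 patches, toy dictionary size

# Placeholder coupled dictionaries: D_hbtc would be trained on HBTC-decoded
# patches, D_clean on the co-located uncompressed patches.
D_hbtc = rng.standard_normal((n, atoms))
D_hbtc /= np.linalg.norm(D_hbtc, axis=0)
D_clean = rng.standard_normal((n, atoms))
D_clean /= np.linalg.norm(D_clean, axis=0)

decoded_patch = rng.standard_normal(n)    # stand-in for an HBTC-decoded patch
alpha = omp(D_hbtc, decoded_patch, sparsity)   # sparse code from the HBTC side
reconstructed_patch = D_clean @ alpha          # rebuilt from the clean side
```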
A COMPARATIVE STUDY ON IMAGE COMPRESSION USING HALFTONING BASED BLOCK TRUNCAT...IJCI JOURNAL
This document summarizes a research paper that compares different techniques for image compression using halftoning-based block truncation coding (HBTC) for color images. It describes the basic block truncation coding (BTC) method and its limitations. It then discusses several HBTC methods - ordered dither block truncation coding, error diffusion block truncation coding, and dot diffused block truncation coding - and evaluates their performance based on metrics like PSNR, MSE, compression ratio, and SSIM. The results show that dot diffused block truncation coding provides better visual quality than the other methods.
The document discusses a method for compressing color images using block truncation coding (BTC) and genetic algorithms. BTC works by dividing an image into blocks and quantizing each block to a high or low value based on the block's mean, which introduces the quality issues associated with BTC. Since the red, green, and blue planes of a color image are correlated, the method uses a single common bit plane, optimized with a genetic algorithm, to represent all three color planes, improving quality and compression ratio compared to standard BTC and error-diffused BTC. Experimental results showed that the proposed method produced higher-quality reconstructed images as measured by peak signal-to-noise ratio.
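For reference, a minimal moment-preserving BTC step for one grayscale block, matching the mean-thresholding description above (the genetic-algorithm common bit plane of the paper is not reproduced here):

```python
import numpy as np

def btc_block(block):
    """Moment-preserving BTC for one grayscale block: bitmap + two levels."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = bitmap.sum()                      # pixels quantized to the high level
    m = block.size
    if q in (0, m):                       # flat block: both levels equal the mean
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return bitmap, low, high

block = np.array([[12, 200, 14, 190],
                  [11, 205, 13, 188],
                  [10, 198, 15, 192],
                  [12, 201, 14, 189]], dtype=float)
bitmap, low, high = btc_block(block)
decoded = np.where(bitmap, high, low)     # decoder output for this block
```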
Improved block based segmentation for jpeg compressed document imageseSAT Journals
Abstract
Image compression aims to minimize the size in bytes of a graphics file without degrading image quality to an unacceptable level. Compound image compression is normally based on one of three classification approaches: object based, layer based, and block based. This paper presents a block-based segmentation for visually lossless compression of scanned documents that contain not only photographic images but also text and graphics. At low bit rates, such documents suffer from undesirable compression artifacts. Existing methods can reduce these artifacts through post-processing without changing the encoding process; some of these post-processing methods require classifying the encoded blocks into different categories.
Keywords: AC energy, Discrete Cosine Transform (DCT), JPEG, K-means clustering, Threshold value
An Algorithm for Improving the Quality of Compacted JPEG Image by Minimizes t...ijcga
JPEG, a block-transform-coded lossy image compression format, has been used to keep the storage and bandwidth requirements of digital images at practical levels. However, JPEG compression can introduce unwanted artifacts, such as the 'blocky' artifact found in smooth or monotone areas of an image, caused by coarse quantization of DCT coefficients. A number of image filtering approaches in the literature incorporate value-averaging filters to smooth out the discontinuities that appear across DCT block boundaries. Although some of these approaches reduce the severity of the artifacts to some extent, others cause excessive blurring of high-contrast edges. The image deblocking algorithm presented in this paper filters the block boundaries by smoothing, detecting blocked edges, and then filtering the difference between the pixels spanning a blocked edge. The algorithm successfully reduces blocky artifacts and therefore increases both the subjective and objective quality of the reconstructed image.
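A crude sketch of the boundary-filtering idea, assuming 8x8 coding blocks: small jumps across a block boundary are treated as artifacts and softened, while large jumps are assumed to be genuine edges and left alone. The threshold and the quarter-step correction are illustrative choices, not the paper's values.

```python
import numpy as np

def deblock_vertical(img, block=8, threshold=12.0):
    """Soften small jumps across vertical block boundaries (sketch)."""
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        left, right = out[:, x - 1], out[:, x]
        diff = right - left
        weak = np.abs(diff) < threshold   # small jump: likely a blocking artifact
        left[weak] += diff[weak] / 4.0    # nudge boundary pixels toward each other
        right[weak] -= diff[weak] / 4.0
    return out

base = np.tile(np.arange(16, dtype=float), (16, 1))
offsets = np.repeat(np.array([0.0, 5.0]), 8)   # +5 DC jump in the right block
img = base + offsets                           # artificial blocking seam at x=8
smoothed = deblock_vertical(img)               # seam softened; a real edge
                                               # (jump >= threshold) is untouched
```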
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GPIOSR Journals
This document summarizes and compares four different image compression techniques: Singular Value Decomposition (SVD), Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), and Gaussian Pyramid (GP). It analyzes the performance of each technique based on metrics like Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), and compression ratio. The techniques are tested on biometric images from iris, fingerprint, and palm print databases to evaluate image quality after compression.
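The metrics named above are easy to compute directly; a small helper for 8-bit images is sketched below, where compression ratio is simply the raw-to-compressed byte ratio.

```python
import numpy as np

def mse(original, reconstructed):
    return float(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    err = mse(original, reconstructed)
    return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(raw_bytes, compressed_bytes):
    return raw_bytes / compressed_bytes

a = np.full((8, 8), 120, dtype=np.uint8)
b = a.copy(); b[0, 0] = 130               # one corrupted pixel
print(mse(a, b), psnr(a, b))              # small MSE, correspondingly high PSNR
```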
An improved image compression algorithm based on daubechies wavelets with ar...Alexander Decker
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
MULTIPLE RECONSTRUCTION COMPRESSION FRAMEWORK BASED ON PNG IMAGEijcsity
Neural networks (NNs) are known to achieve excellent performance in image compression and reconstruction, but many practical shortcomings still limit their image processing ability. To address this, a joint framework combining a neural network with scale compression is proposed in this paper. The framework first encodes the incoming PNG image, converts it to a binary representation that is fed to the decoder to reconstruct an intermediate-state image, then passes the intermediate image through a zooming compressor, recompresses it, and reconstructs the final image. Experimental results show that the method processes digital images well and suppresses the reverse-expansion problem, improving the compression effect by a factor of 4 to 10 over using an RNN alone. Applied to transmitted digital images, the method performs far better than existing compression methods alone, with changes imperceptible to the human visual system.
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
Color image compression based on spatial and magnitude signal decomposition IJECEIAES
In this paper, a simple color image compression system is proposed based on image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: a most significant value (MSV) and a least significant value (LSV). Because the MSV is strongly affected by even simple modifications, an adaptive lossless compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. A lossy compression system, based on adaptive error-bounded coding with a DCT compression scheme, handles the LSV. The performance of the developed system was analyzed and compared with the universal JPEG standard; the results indicate performance comparable to or better than JPEG.
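One plausible reading of the MSV/LSV decomposition above is a bit-plane split, sketched below with an assumed 4/4 split point: the top bits of each pixel form the most significant value (routed to the lossless coder) and the bottom bits the least significant value (routed to the lossy DCT-based coder).

```python
import numpy as np

BITS = 4                                   # assumed split point: top 4 / bottom 4

def decompose(band):
    """Split an 8-bit band into most and least significant values."""
    msv = band >> BITS                     # top bits: coded losslessly in the paper
    lsv = band & ((1 << BITS) - 1)         # bottom bits: coded lossily (DCT-based)
    return msv, lsv

def recompose(msv, lsv):
    return (msv << BITS) | lsv

band = np.array([[201, 34], [128, 255]], dtype=np.uint8)
msv, lsv = decompose(band)
assert np.array_equal(recompose(msv, lsv), band)   # split is exactly invertible
```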
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGESijcnac
This project presents a new image compression technique for coding retinal and fingerprint images. Retinal images are used to detect diseases such as diabetes and hypertension, while fingerprint images are used for security purposes. In this work, the contourlet transform of the retinal or fingerprint image is taken first, and the contourlet coefficients are quantized using an adaptive multistage vector quantization scheme. The number of code vectors in the adaptive scheme depends on the dynamic range of the input image.
Modified Skip Line Encoding for Binary Image Compressionidescitation
This paper proposes a modified skip line encoding technique for lossless compression of binary images. Skip line encoding exploits correlation between successive scan lines by encoding only one line and skipping similar lines. The proposed technique improves upon existing skip line encoding by allowing a scan line to be skipped if a similar line exists anywhere in the image, rather than just successive lines. Experimental results on sample images show the modified technique achieves higher compression ratios than conventional skip line encoding.
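A toy version of the modified scheme: each scan line of a binary image is either stored verbatim or replaced by a back-reference to any earlier identical line, not just the immediately preceding one. Real skip line coding operates at the bit level; this sketch only captures the referencing idea.

```python
import numpy as np

def encode_skip_lines(bitmap):
    """Store each row verbatim or as a reference to any earlier identical row."""
    seen, stream = {}, []
    for row in bitmap:
        key = row.tobytes()
        if key in seen:
            stream.append(('skip', seen[key]))   # back-reference, not just row-1
        else:
            seen[key] = len(stream)
            stream.append(('raw', row.copy()))
    return stream

def decode_skip_lines(stream):
    rows = []
    for tag, payload in stream:
        rows.append(rows[payload] if tag == 'skip' else payload)
    return np.vstack(rows)

img = np.array([[0, 1, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0], [1, 1, 0, 0]], np.uint8)
stream = encode_skip_lines(img)                  # rows 2 and 3 become references
assert np.array_equal(decode_skip_lines(stream), img)
```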
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR...IJNSA Journal
Methods such as edge-directed interpolation and projection onto convex sets (POCS), widely used for image error concealment, produce good image quality but are complex and time consuming. They are also unsuitable for real-time or online error concealment, where the decoder may lack sufficient computational power. In this paper, we propose a data-hiding scheme for error concealment in digital images: edge direction information for each block is extracted at the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), reducing the workload of the decoder. System performance in terms of fidelity and computational load is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information) and uses them to recover the lost data. Experimental results support the effectiveness of the proposed scheme.
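The QIM step mentioned above can be illustrated in its basic scalar, binary form: a host coefficient is quantized onto one of two interleaved lattices depending on the message bit, and the decoder recovers the bit by finding the nearer lattice. The step size is an assumption, and the paper's near-orthogonal M-ary variant is more elaborate than this sketch.

```python
import numpy as np

DELTA = 8.0                                   # quantization step (assumed)

def qim_embed(value, bit):
    """Quantize onto the lattice shifted by bit * DELTA/2."""
    offset = bit * DELTA / 2.0
    return np.round((value - offset) / DELTA) * DELTA + offset

def qim_extract(value):
    """Return the bit whose lattice lies nearer to the received value."""
    d0 = abs(value - qim_embed(value, 0))
    d1 = abs(value - qim_embed(value, 1))
    return 0 if d0 <= d1 else 1

host = [103.2, 55.7, 18.4, 240.0]             # stand-in host coefficients
bits = [1, 0, 1, 1]
marked = [qim_embed(v, b) for v, b in zip(host, bits)]
assert [qim_extract(v) for v in marked] == bits   # exact recovery without noise
```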
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM...VLSICS Design
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35MHz, processing an 8x8 block in 6604ns with a pipeline latency of 140 cycles.
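The pipeline summarized above (separable 2D-DCT, table-driven quantization, zigzag reordering) maps directly to a few lines of NumPy/SciPy; the flat quantization table below is a placeholder for the ROM table in the hardware design.

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2D-DCT via separability: 1D-DCT along rows, then along columns."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def zigzag_order(n=8):
    """Index pairs of an n x n block in JPEG zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0
qtable = np.full((8, 8), 16.0)                  # placeholder quantization table
coeffs = np.round(dct2(block) / qtable)         # quantize each DCT coefficient
zigzag = np.array([coeffs[r, c] for r, c in zigzag_order()])  # 64-entry vector
```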
The document presents a new efficient color image compression technique that aims to improve the quality of decompressed images while achieving higher compression ratios. It does this by compressing important edge parts of the image differently than non-edge background parts. Specifically, it applies low-quality lossy compression to non-edge parts and high-quality lossy compression to edge parts. The technique uses edge detection, adaptive thresholding based on local variance and mean, and discrete cosine transform followed by quantization and entropy encoding. Experimental results on various images show it achieves better compression ratios, lower bit rates, and higher peak signal to noise ratios compared to non-adaptive methods.
11.0003www.iiste.org call for paper_d_discrete cosine transform for image com...Alexander Decker
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
Survey paper on image compression techniquesIRJET Journal
This document summarizes and compares several popular image compression techniques: wavelet compression, JPEG/DCT compression, vector quantization (VQ), fractal compression, and genetic algorithm compression. It finds that all techniques perform satisfactorily at 0.5 bits per pixel, but for very low bit rates like 0.25 bpp, wavelet compression techniques like EZW perform best in terms of compression ratio and quality. Specifically, EZW and JPEG are more practical than others at low bit rates. The document also notes advantages and disadvantages of each technique and concludes hybrid approaches may achieve even higher compression ratios while maintaining image quality.
Review of Diverse Techniques Used for Effective Fractal Image CompressionIRJET Journal
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
This document discusses GPU-based image compression and interpolation using anisotropic diffusion. It presents a method for image compression using binary tree triangular coding to store pixel coordinates. For decompression and interpolation, a partial differential equation (PDE) method called Perona and Malik diffusion is used. Performance of the PDE-based interpolation algorithm is evaluated on CPU and GPU architectures, demonstrating that the GPU implementation significantly reduces computation time, especially for higher resolution images.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
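The sum-of-absolute-differences search described above is the standard full-search block matcher; a compact version over a small search window is sketched below (the hierarchical decomposition and directional prediction stages are not reproduced).

```python
import numpy as np

def sad(a, b):
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def best_match(prev, block, top, left, radius=4):
    """Full search: find the offset in prev minimizing SAD against block."""
    h, w = block.shape
    best = (float('inf'), (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= prev.shape[0] - h and 0 <= x <= prev.shape[1] - w:
                cost = sad(prev[y:y + h, x:x + w], block)
                best = min(best, (cost, (dy, dx)))
    return best                                  # (SAD cost, motion vector)

rng = np.random.default_rng(3)
prev = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
cur_block = prev[10:18, 13:21]                   # a block displaced by (2, 3)
cost, mv = best_match(prev, cur_block, top=8, left=10)
assert mv == (2, 3) and cost == 0.0              # exact match found in prev
```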
A systematic image compression in the combination of linear vector quantisati...eSAT Publishing House
1) The document presents a method for image compression that combines linear vector quantization and discrete wavelet transform.
2) Linear vector quantization is used to generate codebooks and encode image blocks, achieving better PSNR and MSE than self-organizing maps.
3) The encoded blocks are then subjected to discrete wavelet transform. Low-low subbands are stored for reconstruction while other subbands are discarded.
4) Experimental results show the proposed method achieves higher PSNR and lower MSE than existing techniques, preserving both texture and edge information.
Secure High Capacity Data Hiding in Images using EDBTCijsrd.com
Block truncation coding is an efficient compression technique with low computational complexity, but it suffers from two major issues: blocking and false contour effects. Error-diffused BTC (EDBTC) addresses these deficiencies by applying visual low-pass compensation on the bitmap. In this paper, complementary hiding EDBTC is developed to resolve these issues: first a single watermark is embedded, then multiple watermarks. Watermark embedding usually relies on an adaptive external bias factor, which damages image quality and robustness; here an extremely small bias factor controls the embedding, enabling a high-capacity scenario without significantly damaging image quality. The few data-hiding schemes proposed so far damage the characteristics of BTC. The security of the embedded watermark is high, so it cannot easily be extracted by malicious users: the watermark is encrypted with a standard encryption algorithm before being embedded.
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMTIRJET Journal
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
A REVIEW ON IMAGE COMPRESSION USING HALFTONING BASED BTC ijcsity
This paper scrutinizes image compression using halftoning-based block truncation coding (HBTC) for color images. Several algorithms are considered: the original block truncation coding (BTC), ordered dither block truncation coding, error diffusion block truncation coding, and dot diffused block truncation coding, all of which divide the image into non-overlapping blocks. BTC serves as the baseline compression technique but exhibits two disadvantages, false contours and the blocking effect, which HBTC is used to overcome. Objective measures, namely peak signal-to-noise ratio, mean square error, structural similarity index, and compression ratio, are used to evaluate image quality. The conclusions show that dot diffused block truncation coding outperforms both BTC and error diffusion block truncation coding.
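To make the ODBTC variant above concrete: instead of a single mean threshold, each pixel in a block is compared against a position-dependent threshold interpolated between the block minimum and maximum by a dither array. The 4x4 Bayer matrix below is a stand-in for the method's actual dither arrays.

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix, values 0..15 (stand-in threshold pattern).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]], dtype=float)

def odbtc_block(block, dither=BAYER4):
    """ODBTC: threshold each pixel between block min/max via the dither array."""
    lo, hi = block.min(), block.max()
    thresh = lo + (dither + 0.5) / dither.size * (hi - lo)
    bitmap = block >= thresh                 # halftoned bitmap for this block
    return bitmap, lo, hi

rng = np.random.default_rng(4)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
bitmap, lo, hi = odbtc_block(block)
decoded = np.where(bitmap, hi, lo)           # decoder uses min/max quantizers
```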
Review On Fractal Image Compression TechniquesIRJET Journal
This document reviews different techniques for fractal image compression. It discusses how fractal image compression works by dividing an image into range and domain blocks. It then summarizes several papers that propose fractal compression techniques using the discrete cosine transform (DCT) or discrete wavelet transform (DWT), which aim to improve compression ratio and reduce encoding time. Finally, the document proposes a new method that combines wavelets with fractal image compression to further increase the compression ratio while keeping image quality losses during decompression low.
An efficient image compression algorithm using dct biorthogonal wavelet trans...eSAT Journals
Abstract
Digital imaging applications have recently increased significantly, driving the need for effective image compression techniques. Image compression removes redundant information from an image, so that only the necessary information is stored, reducing transmission bandwidth, transmission time, and storage size. This paper proposes a new image compression technique using a DCT-biorthogonal wavelet transform with arithmetic coding to improve the visual quality of an image; it is a simple technique for obtaining better compression results. In the new algorithm, a biorthogonal wavelet transform is applied first, then a 2D DCT-biorthogonal wavelet transform is applied to each block of the low-frequency sub-band. Finally, the values from each transformed block are split and arithmetic coding is applied for compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
This document discusses image compression using discrete wavelet transform (DWT) and principal component analysis (PCA). It first reviews several related works that use transforms like curvelet, wavelet and discrete cosine transform for image compression. It then describes preprocessing the input image using DWT to decompose it into sub-bands, and applying PCA on the high-frequency sub-bands to reduce dimensions and compress the image while preserving important boundaries. The algorithm is implemented and evaluated based on metrics like peak signal-to-noise ratio, standard deviation and entropy. Results show 95% accuracy in image identification from a database, though processing time increases significantly with database size.
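A minimal sketch of that DWT-plus-PCA pipeline, assuming PyWavelets and scikit-learn are available; the wavelet, decomposition depth, and component count are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
img = rng.random((64, 64))

# One-level 2D DWT: approximation plus horizontal/vertical/diagonal detail bands.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# PCA on the rows of one high-frequency band: keep a few components.
pca = PCA(n_components=8)
cH_compressed = pca.fit_transform(cH)        # 32 rows -> 8 values per row
cH_restored = pca.inverse_transform(cH_compressed)

# Rebuild the image from the (lossy) detail band and the untouched bands.
restored = pywt.idwt2((cA, (cH_restored, cV, cD)), 'haar')
```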
Scalable video coding (SVC), an extension of H.264/AVC, is an advanced video coding standard that provides multimedia services over variable transport environments. SVC encoding has large computational complexity because it uses an exhaustive search over macroblock modes and motion vector layers. This paper introduces a new algorithm, scalable quality inter-layer performance enhancement (SQILPE), to reduce this complexity while preserving quality. The algorithm analyzes the amount of change in pixel intensity within each macroblock together with video statistics. Experimental results show that the proposed fast mode decision algorithm achieves computational savings of up to 77.6% with almost no loss in quality.
This document summarizes and compares four different image compression techniques: Singular Value Decomposition (SVD), Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), and Gaussian Pyramid (GP). It analyzes the performance of each technique based on metrics like Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), and compression ratio. Experiments were conducted on biometric image databases to compress and reconstruct images using these four techniques, and the results were evaluated and compared based on the above-mentioned metrics.
Feature Extraction in Content based Image Retrievalijcnes
This document discusses feature extraction methods for content-based image retrieval (CBIR). It describes Ordered Dither Block Truncation Coding (ODBTC), an image compression technique whose output can be used to extract image features without decoding. The proposed CBIR system extracts two features from ODBTC-encoded images: a Color Co-occurrence Feature (CCF) and a Bit Pattern Feature (BPF). The CCF is extracted from the color quantizers produced by ODBTC, while the BPF is based on a bit pattern codebook generated from ODBTC bitmap images. Experimental results show this approach provides superior retrieval accuracy compared to earlier CBIR methods.
Post-Segmentation Approach for Lossless Region of Interest Codingsipij
This paper presents a lossless region of interest coding technique suitable for interactive telemedicine over networks. The new encoding scheme allows a server to transmit only part of the compressed image data progressively as a client requests it. The technique differs from region-scalable coding in JPEG2000 in that it does not define the region of interest (ROI) at encoding time: the image is fully encoded and stored on the server, and the user may select an ROI after compression is done. This feature is the main contribution of the research. The proposed method achieves region-scalable coding using integer wavelet lifting, successive quantization, and a partitioning that rearranges the wavelet coefficients into subsets; each subset, representing a local area of the image, is then coded separately using run-length and entropy coding. The paper demonstrates the benefits of the technique with examples and simulation results.
This document summarizes an article that proposes modifications to the JPEG 2000 image compression standard to achieve higher compression ratios while maintaining acceptable error rates. The proposed Adaptive JPEG 2000 technique involves pre-processing images with a transfer function to make them more suitable for compression by JPEG 2000. This is intended to provide higher compression ratios than the original JPEG 2000 standard while keeping root mean square error within allowed thresholds. The document provides background on JPEG 2000 and lossy image compression techniques, describes the proposed pre-processing approach, and indicates it was tested on single-channel images.
A robust combination of dwt and chaotic function for image watermarkingijctet
This document summarizes a research paper on a robust image watermarking technique that combines discrete wavelet transform (DWT) and a chaotic function. The proposed method embeds a watermark into selected blocks of the low-frequency DWT subband of an image. It calculates the Euclidean distance between blocks of the watermark and image to select the most similar block for embedding. Experimental results on standard test images show the proposed method achieves better performance than previous methods in terms of PSNR and structural similarity under compression attacks. The extraction accuracy remains high even with noise attacks, though it degrades more under filtering attacks.
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...VLSICS Design
This paper presents the architecture and VHDL design of a two-dimensional discrete cosine transform (2D-DCT) with quantization and zigzag arrangement. This architecture is used as the core datapath in JPEG image compression hardware. The 2D-DCT calculation exploits the separability property, so the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. Architectures for the quantization and zigzag processes are also described; the quantization step is performed using division. The design targets a Xilinx Spartan-3E XC3S500E FPGA, uses 1891 slices, 51 I/O pins, and 8 multipliers, and reaches an operating frequency of 101.35 MHz. One input block of 8 x 8 elements of 8 bits each is processed in 6604 ns with a pipeline latency of 140 clock cycles.
Wavelet based Image Coding Schemes: A Recent Survey ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...IJDKP
1) The document proposes a new simple method to reduce visual block artifacts in images compressed using DCT (used in JPEG) for urban surveillance systems.
2) The method smooths only the connection edges between adjacent blocks while keeping other image areas unchanged.
3) Simulation results show the proposed method achieves better image quality, as measured by PSNR, than median and Wiener filters, while using significantly fewer computational resources.
International Journal on Soft Computing ( IJSC )ijsc
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
Improving Performance of Multileveled BTC Based CBIR Using Sundry Color SpacesCSCJournals
The paper presents an extension of content-based image retrieval (CBIR) techniques based on multilevel block truncation coding (BTC) using nine sundry color spaces. BTC-based features constitute one CBIR method built on the color features of an image; the approach considers the red, green, and blue planes of an image to compute a feature vector. This BTC-based CBIR can be extended to multileveled BTC for improved retrieval performance, and the paper extends multileveled BTC from the RGB color space to nine other color spaces. The CBIR techniques BTC Level-1 through BTC Level-4 are applied in the various color spaces to analyze and compare their performance. The techniques are tested on a generic image database of 1000 images spread across 11 categories; for each CBIR technique, 55 queries (5 per category) are fired on the extended Wang generic image database to compute average precision and recall. The results show performance improvement (i.e., higher precision and recall) with BTC-CBIR methods using luminance-chrominance color spaces (YCgCb, Kekre's LUV, YUV, YIQ, YCbCr) compared to non-luminance color spaces (RGB, HSI, HSV, rgb, XYZ). The performance of multileveled BTC-CBIR increases gradually with level up to a certain extent (Level 3) and then increases only slightly, owing to voids created at higher levels. At all levels, Kekre's LUV color space gives the best performance.
A Review on Image Compression using DCT and DWTIJSRD
This document reviews image compression techniques using discrete cosine transform (DCT) and discrete wavelet transform (DWT). It discusses how DCT transforms images from spatial to frequency domains, allowing for energy compaction and efficient encoding. DWT is a multi-resolution technique that represents images at different frequency bands. The document analyzes various studies that have used DCT and DWT for compression and compares their performance in terms of metrics like peak signal-to-noise ratio and compression ratio. It finds that DWT generally provides better compression performance than DCT, though DCT requires less computational resources. A hybrid DCT-DWT technique is also proposed to combine the advantages of both methods.
Super-Spatial Structure Prediction Compression of Medicalijeei-iaes
The demand to preserve raw image data for further processing has increased with the rapid growth of digital technology. In the medical industry, images generally come in sequences that are highly correlated. These images are very important, and hence a lossless compression technique is required to reduce the number of bits needed to store these image sequences and to take less time to transmit them over the network. The proposed compression method combines Super-Spatial Structure Prediction with inter-frame coding that includes Motion Estimation and Motion Compensation to achieve a higher compression ratio. Motion Estimation and Motion Compensation are performed with the fast block-matching Inverse Diamond Search method. To enhance the compression ratio, we propose a new scheme based on Bose, Chaudhuri and Hocquenghem (BCH) codes. Results are compared in terms of compression ratio and bits per pixel to the prior arts. Experimental results of our proposed algorithm for medical image sequences achieve 30% more reduction than the other state-of-the-art lossless image compression methods.
3 d discrete cosine transform for image compressionAlexander Decker
1. The document discusses 3D discrete cosine transform (DCT) for image compression. 3D-DCT takes a sequence of frames and divides them into 8x8x8 cubes.
2. Each cube is independently encoded using 3D-DCT, quantization, and entropy encoding. This concentrates information in low frequencies.
3. The technique achieves a compression ratio of around 27 for a set of 8 frames, better than 2D JPEG. By exploiting correlations in both spatial and temporal dimensions, 3D-DCT allows for improved compression over 2D transforms.
Similar to Halftoning-based BTC image reconstruction using patch processing with border constraint (20)
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product’s manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW) and global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models support vector machines (SVM), random forest (RF), logistic regression (LR), multinomial Naïve Bayes (MNB), two deep learning (DL) models convolutional neural network (CNN), long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML, DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models, while CNN provides the best accuracy of 97% upon word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. In this work, Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goal of this study was to get a wider transmission capacity, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was to get higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the supplied antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the simulation revealed that the side-lobe level at phi, at -28.8 dB, was much better than those of the earlier works. Thus, it makes a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure concerned with controlling the smart antenna pattern to accommodate specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks, like slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to the big data issue, as many Internet of Things (IoT) devices are in the network. Based on previous research, most of the frequency spectrum was used, but some spectrum was not used, called a spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require any prior knowledge of the licensed users’ signals. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that wavelet-based sensing has higher precision in detection and the interference towards the primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. This attack causes enormous losses for a company because it can reduce the level of user trust and damage the company’s reputation, losing customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using the synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass settings when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often affects the characteristics of either linear or nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems suffering from mismatched uncertainties and exogenous disturbances. The perturbation to the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in Proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty from the system with mismatched uncertainty. Lastly, two-dimensional numerical expressions are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to change the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. The HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and a power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding (FZF) logic has the lowest power consumption of 1.4 µW and a PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance over the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, with around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns, obtained from antennas shaped as the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics, at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic, using a frequency selective surface designed to provide characteristics suitable for the purpose. The designed frequency selective surface provides transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter and thus a reflective surface for electromagnetic waves at the 5 GHz operating frequency. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan blade type radiation characteristic is obtained at the 4 GHz operating frequency, while a cosecant-square radiation characteristic is obtained at the 5 GHz operating frequency.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results obtained show a linear relationship between the output light intensity and the iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved are 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. At the same wavelength, other performance parameters are also studied. The resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making it simpler and more cost-effective in the implementation of iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can preserve robust accuracy while updating new data and new class data in the online training situation. The accuracy robustness arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often receive new class data. Our experiment compares the accuracy and accuracy robustness of PSTOS-ELM during data updating with the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain the data when new class data arrives. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. They offer the best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models to medical images still lies in removing blurs in image edges, which directly affects the edge indicator function, leads to non-adaptive segmentation and causes a wrong analysis of pathologies, which prevents a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation, similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with contours detected in the second equation, which segments the image based on the first equation’s solutions. This approach allows developing a new algorithm that overcomes the studied model’s drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images, in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems and the need to limit power consumption in ultra-large-scale-integration chips of very high density have recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of the basic gates, one can improve the performance of the whole system. In this paper, proposed circuit designs of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product and rise time, and compared with other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with percentage improvements of 65% for the NOR gate, 7% for the NAND gate and 34% for the XNOR gate.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning & Applications.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights the activities of ACEP and provides a technical educational article for members.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on the power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and conventional and nontraditional security are all explored and explained by the researcher. Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, it examines China's role in Central Asia. This study adheres to the empirical epistemological method and takes care of objectivity, critically analyzing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in commerce, pipeline politics, and gaining influence over other governments; this success may be attributed to the effective utilisation of key tools such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to advance their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The result of a case study representing a dairy milk farm supports the theoretical work and highlights the advanced benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Halftoning-based BTC image reconstruction using patch processing with border constraint
TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 1, February 2020, pp. 394~406
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i1.12837
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Halftoning-based BTC image reconstruction using patch
processing with border constraint
Heri Prasetyo¹, Chih-Hsien Hsia², Berton Arie Putra Akardihas³
¹,³ Department of Informatics, Universitas Sebelas Maret (UNS), Indonesia
² Department of Computer Science and Information Engineering, National Ilan University, Taiwan
Article Info
Article history: Received Apr 4, 2019; Revised Nov 19, 2019; Accepted Nov 30, 2019
ABSTRACT
This paper presents a new halftoning-based block truncation coding (HBTC)
image reconstruction using sparse representation framework. The HBTC is a
simple yet powerful image compression technique, which can effectively
remove the typical blocking effect and false contour. Two types of HBTC
methods are discussed in this paper, i.e., ordered dither block truncation coding
(ODBTC) and error diffusion block truncation coding (EDBTC).
The proposed sparsity-based method suppresses the impulsive noise on
ODBTC and EDBTC decoded image with a coupled dictionary containing
the HBTC image component and the clean image component dictionaries.
Herein, a sparse coefficient is estimated from the HBTC decoded image by
means of the HBTC image dictionary. The reconstructed image is
subsequently built and aligned from the clean, i.e. non-compressed image
dictionary and predicted sparse coefficient. To further reduce the blocking
effect, the image patch is firstly identified as “border” and “non-border” type
before applying the sparse representation framework. Adding the Laplacian
prior knowledge on HBTC decoded image, it yields better reconstructed image
quality. The experimental results demonstrate the effectiveness of
the proposed HBTC image reconstruction. The proposed method also
outperforms the former schemes in terms of reconstructed image quality.
Keywords: Error diffusion, Halftoning-BTC, Ordered dithering, Sparse representation
This is an open access article under the CC BY-SA license.
Corresponding Author:
Heri Prasetyo,
Department of Informatics,
Universitas Sebelas Maret (UNS),
Surakarta, Indonesia.
Email: heri.prasetyo@staff.uns.ac.id
1. INTRODUCTION
Block truncation coding (BTC) and its variants have been playing an important role in image processing and computer vision applications, such as image/video compression [1-3], image watermarking [4, 5], data hiding [3, 6], image retrieval and classification [7-10], image restoration [11-13], etc.
Many efforts have been focused on further improving the performance of BTC and its variants, including
the computational complexity reduction, decoded image quality improvement, and its applications, as reported
in [1, 2, 7-9, 12, 13]. The BTC-based image compression finds a new representation of an image to further
reduce the storage requirement, and achieve a satisfactory coding gain. It is classified as a lossy image
compression, in which a given image block is processed to yield a new representation consisting of two color
quantizers and the corresponding bitmap image. The two color quantizers and bitmap image produced at
the encoding stage are then transmitted to the decoder. The typical BTC techniques determine the two color
quantizers, namely low and high means, by maintaining the intrinsic statistical properties of an image such as
first moment, second moment, etc. The corresponding bitmap image is simply obtained by applying
the thresholding operation on each image block with the mean value of this processed block. This bitmap image
consists of two binary values (0 and 1), in which the value 0 is replaced with the low mean value, whereas
the value 1 is substituted with the high mean value in the decoding process.
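To make the encode/decode round trip concrete, the sketch below (a Python illustration of our own, assuming the common mean-threshold BTC variant; the classical moment-preserving BTC computes the two levels slightly differently) encodes one block into two quantizers plus a bitmap and decodes it back:

```python
import numpy as np

def btc_encode(block):
    """Encode one image block into (low mean, high mean, bitmap).

    Mean-threshold BTC: threshold at the block mean, then use the means
    of the pixels below and at/above the threshold as the two quantizers.
    """
    mean = block.mean()
    bitmap = block >= mean
    # Guard against uniform blocks, where one group would be empty.
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    high = block[bitmap].mean() if bitmap.any() else mean
    return low, high, bitmap

def btc_decode(low, high, bitmap):
    """Replace each 0 with the low mean and each 1 with the high mean."""
    return np.where(bitmap, high, low)
```

Only `low`, `high`, and the binary `bitmap` need to be stored or transmitted per block, which is the source of the BTC coding gain.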
The BTC-based image compression can provide low computational complexity, however, it often
suffers from the blocking effect and false contour issues [1, 2, 7, 12]. These problems make it less satisfactory
for human perception. A new type of technique, namely halftoning-based block truncation coding (HBTC),
has been proposed to overcome these problems. Figure 1 depicts the schematic diagram of HBTC technique.
The HBTC substitutes the BTC bitmap image with the halftone image produced from specific image halftoning
methods such as void-and-cluster halftoning [1], dithering approach [7-9], error diffusion technique [2, 10, 11],
dot diffused halftoning [12], etc. This technique alleviates the false contour and blocking effect problems by exploiting the characteristic dither patterns of the halftone image. In addition, the HBTC offers a lower computational
complexity during the process of the two color quantizers determination. Herein, the color quantizers are
simply replaced with the minimum and maximum pixel values found in an image block. Two popular HBTC
methods, namely the ordered dithered block truncation coding (ODBTC) [1] and error diffusion block
truncation coding (EDBTC) [2], have been developed and reported in the literature. The ODBTC and EDBTC
replace the BTC bitmap image with the halftone image produced from the ordered dithering and error diffused
halftoning methods, respectively. Both of the ODBTC and EDBTC schemes yield better image quality
compared to that of the classical BTC method as reported in [1, 2]. The two methods can be applied
to other image processing and computer vision applications, including low computational image
compression [1, 2, 11], content-based image retrieval [7-10], recognition of color building [14, 15], blood
image analysis [16], object detection and tracking [17], etc.
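For contrast with the classical BTC sketch above, a minimal ODBTC encoder can be written as follows (our own illustration: a standard 4×4 Bayer dither array stands in for the trained dither arrays of [1], and the block is thresholded against the array scaled to the block's dynamic range):

```python
import numpy as np

# 4x4 Bayer ordered-dither array, normalized to [0, 1); ODBTC in [1]
# uses a trained dither array instead (an assumption for illustration).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def odbtc_encode(block):
    """ODBTC: quantizers are the block min/max; the bitmap is an
    ordered-dither halftone of the block."""
    lo, hi = block.min(), block.max()
    m, n = block.shape
    # Tile the dither array over the block and scale it to [lo, hi].
    thresh = lo + BAYER4[np.arange(m)[:, None] % 4,
                         np.arange(n)[None, :] % 4] * (hi - lo)
    bitmap = block >= thresh
    return lo, hi, bitmap

def odbtc_decode(lo, hi, bitmap):
    return np.where(bitmap, hi, lo)
```

The min/max quantizers avoid the mean computations of classical BTC, which is where the lower encoding complexity comes from; EDBTC differs only in producing the bitmap by error diffusion instead of ordered dithering.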
[Figure 1 block diagram: input image → bitmap image generation and min/max quantizer determination → transmission channel → halftoning-BTC decoding → decoded image]
Figure 1. Schematic diagram of the halftoning-based BTC
Although the ODBTC and EDBTC significantly reduce the blocking effect and false contour issues
occurring in the classical BTC technique, the impulsive noise is always present at a considerably high level. To reduce
the impulsive noise and mitigate the boundary effect, an additional step can be applied for the HBTC decoded
images. The noise filtering is a simple and naïve approach to suppress the appeared impulsive noise, in which
a specific window size and kernel value are applied. The Gaussian filter is an example of noise filtering. It
performs global filtering, i.e. all pixels are processed in the same manner regardless of their intrinsic statistical
properties. However, it has a limited effect in reducing the noise levels. An extended Gaussian filtering has
been proposed in [13], namely variance classified filtering. This approach applies various kernel functions for
various pixels, and the choice of kernel function is determined by the variance within an image block. In this
particular method, a set of kernel functions can be iteratively offline-trained using the least-mean-squared
(LMS) criterion over a varied training image set. This set of kernel functions is recorded as a look-up table (LUT)
for further usage. As reported in [13], the variance classified filter yields a significantly better image quality
compared to that of the global Gaussian lowpass filtering with the trade-off of higher storage requirement.
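A rough sketch of this classified filtering is given below (our own illustration with placeholder kernels and thresholds; the actual 13 kernels of [13] are LMS-trained offline and stored in the LUT):

```python
import numpy as np

def variance_classified_filter(img, kernels, thresholds, p=7):
    """Filter each pixel with a kernel chosen by local variance.

    `thresholds` is a sorted list of variance boundaries and `kernels`
    a list of len(thresholds) + 1 normalized p-by-p kernels (the LUT);
    both are placeholders here, not the trained values of [13].
    """
    half = p // 2
    padded = np.pad(img.astype(float), half, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + p, x:x + p]
            # Local variance selects the kernel index from the LUT.
            k = kernels[np.searchsorted(thresholds, win.var())]
            out[y, x] = np.sum(win * k)
    return out
```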
The sparse representation learns an over-complete dictionary from a set of image patches as training
data [18]. The K-singular value decomposition (KSVD) sparse representation technique [19, 20] offers stable
results over the existing convex relaxation approaches for the sparse coding learning and approximation. As
reported in [19], the KSVD approach outperforms the matching pursuit [21], orthogonal matching pursuit [22],
basis pursuit [23], and maximum a priori (MAP) approach [24]. The sparse representation has been
demonstrated to yield a promising result in several image processing and computer vision applications such as
image denoising [19, 25], image restoration [26, 27], etc.
In this paper, a new method on HBTC image reconstruction is developed using the sparsity-based
approach. Herein, the impulsive noise levels are reduced by means of coupled dictionaries, in which one
dictionary is created from the HBTC images, while the other is learned from clean images (uncorrupted
images). In the sparse coding stage, the sparse coefficients are firstly estimated from the decoded image
containing high impulsive noise levels. The reconstructed image is then predicted using the clean image
dictionary with the predicted sparse coefficients.
The rest of this paper is organized as follows. Section 2 delivers the VQ-based HBTC image
reconstruction. Section 3 presents the sparsity-based HBTC image reconstruction. Extensive experimental
results are reported in section 4. Finally, the conclusions are drawn at the end of this paper.
2. VECTOR QUANTIZATION-BASED IMAGE RECONSTRUCTION
This section elaborates the patch-based processing for HBTC image reconstruction using vector
quantization (VQ) approach. Herein, the image patch refers to the processed image block. In this method, a
single image patch (HBTC decoded image) is replaced with an image patch obtained from VQ codewords
based on the closest matching rule [11]. In our proposed method, the closest matching is conducted under
the Euclidean distance similarity criterion. A smaller Euclidean distance indicates a more similar patch.
The selected image patch effectively improves the quality of HBTC decoded image. Different image patches
require different processes based on their properties, i.e. “border” or “non-border” type. Figure 2 shows three
possible border processing scenarios/constraints, i.e. horizontal, vertical, and corner borders. As shown in this
figure, suppose that there are four different image blocks denoted as two gray image blocks and two white
image blocks. These image blocks are produced from the HBTC process independently. Each image block is
uncorrelated with the other. The characteristics of one image block are dissimilar to those of another block even
though they are adjacent neighbors. Herein, an image patch with border constraint refers to a set of pixels
which lie on two or more different image blocks. The red part of Figure 2 (a) depicts a set of pixels lying on several image borders. Since these image pixels are in a horizontal position, we regard them as an image patch
with a horizontal border. Figures 2 (b) and (c) are examples of image patch in vertical direction and corner
part, respectively. Image patches excluded in these three border constraints are considered as non-border
image patches.
Figure 2. Types of image patch processing: (a) horizontal, (b) vertical, and (c) corner border
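Under these definitions, classifying a patch reduces to checking whether its footprint crosses a block boundary; a small sketch of our own follows, with the horizontal/vertical naming of Figure 2 assumed:

```python
def classify_patch(x, y, patch, block=8):
    """Classify a patch by the HBTC block borders it straddles.

    (x, y) is the patch's top-left corner, `patch` its side length and
    `block` the HBTC block size (8 or 16 in this paper).
    """
    crosses_row = (y % block) + patch > block  # spans a horizontal border
    crosses_col = (x % block) + patch > block  # spans a vertical border
    if crosses_row and crosses_col:
        return 'corner'
    if crosses_row:
        return 'horizontal'
    if crosses_col:
        return 'vertical'
    return 'non-border'
```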
The VQ-based method removes the impulsive noise on the border and non-border cases by means of
a trained visual codebook, as explained below. This method replaces the HBTC decoded image patch with
the visual codebook generated from clean images. The clean images can be simply obtained from natural
images, i.e. some images without HBTC processing. Suppose $T = \{i_k(x, y)\}$, with $k = 1, 2, \ldots, K$, is the set of clean image patches used as the training set, where $K$ denotes the number of training image patches. The VQ clustering iteratively processes this clean image set to produce the representative visual codebook $C = \{C_1, C_2, \ldots, C_{N_c}\}$ containing $N_c$ codewords.
Let $o(x, y)$ be a halftoning-based BTC decoded image patch. The VQ-based approach firstly identifies
whether the image patches are of border or non-border type. If an image patch is classified as non-border, then
it is processed with the non-border visual codebook. Conversely, when the image patch is classified as a border
patch, then it is processed with either horizontal, vertical, or corner border visual codebook. Figure 3 illustrates
the VQ-based approach for HBTC decoded image reconstruction. Figure 4 displays some examples of
the visual codebook over horizontal, vertical, and corner borders. After border and non-border image patch
determination, this image patch is then processed using VQ-closest matching as defined below:
$c^* = \arg\min_{c = 1, 2, \ldots, N_c} \left\| o(x, y) - C_c^{\theta} \right\|_2^2$ (1)
where $C_c^{\theta}$ denotes the selected visual codebook, and $\theta$ indicates whether the current processed image patch is of “border” or “non-border” type. The symbol $c^*$ represents the visual codeword index with the lowest distortion (the most similar) to the image patch $o(x, y)$.
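In code, the closest matching of (1) and the replacement of (2) amount to a nearest-neighbor search over the codebook of the patch's border type (a sketch under the same assumptions as above; `codebooks` and `classify_patch` are illustrative names, not from the paper):

```python
import numpy as np

def closest_codeword(patch_vec, codebook):
    """Eq. (1): index of the codeword with the minimum squared
    Euclidean distance to the flattened patch.

    `codebook` is an (N_c, d) array of codewords for border type theta.
    """
    dists = np.sum((codebook - patch_vec) ** 2, axis=1)
    return int(np.argmin(dists))

# Usage: pick the codebook by border type, then match and replace.
# theta = classify_patch(x, y, p)                          # border type
# c_star = closest_codeword(o.ravel(), codebooks[theta])
# o_restored = codebooks[theta][c_star].reshape(o.shape)   # eq. (2)
```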
Figure 3. Schematic diagram of VQ-based image reconstruction
Figure 4. Visual codebook generated from: (a) horizontal, (b) vertical, and (c) corner borders
The selected visual codeword $C_{c^*}^{\theta}$ replaces the image patch $o(x, y)$ by considering the border or non-border information. The image patch replacement is denoted as:

$\tilde{o}(x, y) = C_{c^*}^{\theta}$ (2)
where $\tilde{o}(x, y)$ is the restored image patch at position $(x, y)$. Notably, the VQ-reconstruction is still implemented as a pixel-by-pixel process, not as a block-wise process. Consequently, the replacement process is
an overdetermined problem. The averaging process over several restored image patches yields a single HBTC
reconstructed image. This process is denoted as:
$\tilde{R}(x, y) = \dfrac{\sum \tilde{o}(x, y)}{\sum R(x, y)^{T} R(x, y)}$ (3)
where $R(x, y)$ denotes the image patch extraction operator [18, 19, 26]. This operator counts how many times each pixel is employed in the closest matching process and image patch substitution, i.e. how many currently processed patches involve that pixel. The size of the matrix $R(x, y)$ is identical to the size of the image patch or VQ codewords. The result of $\tilde{R}(x, y)$ in (3) is intuitively identical to that of simple arithmetic averaging. Thus, the image patch averaging operation is employed at the end of the VQ-based image reconstruction to yield a single value in the overdetermined system.
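Concretely, the averaging of (3) can be realized by accumulating every restored patch into the image and dividing by the per-pixel overlap count (a sketch of our own):

```python
import numpy as np

def average_overlapping(restored_patches, positions, shape, p):
    """Aggregate overlapping restored p-by-p patches into one image.

    Summing each patch into an accumulator and dividing by how many
    patches cover each pixel is exactly the averaging of eq. (3).
    """
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (x, y) in zip(restored_patches, positions):
        acc[y:y + p, x:x + p] += patch
        cnt[y:y + p, x:x + p] += 1.0
    return acc / np.maximum(cnt, 1.0)  # guard pixels covered by no patch
```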
3. SPARSITY-BASED IMAGE RECONSTRUCTION
This section presents the proposed HBTC image reconstruction methods using a sparsity-based
approach. Herein, the image patch is firstly extracted from HBTC decoded image. The sparsity-based approach
simply replaces this decoded image patch with the closest match of clean image dictionary. This method also
considers the border constraint on HBTC image reconstruction.
The sparsity-based method utilizes two learned coupled dictionaries. An image patch is firstly
determined and investigated whether it falls into the border or non-border region as already introduced in
the VQ-based processing. Figure 5 displays the HBTC image reconstruction using sparsity-based approach.
Figure 6 gives some examples of the learned dictionaries. Similarly to the VQ-based post-processing approach, the sparsity-based method utilizes both a non-border dictionary and a border dictionary. The learned dictionary is
generated from a set of image patches by considering the non-border and border constraints. The border
dictionary can be classified as horizontal, vertical, and corner border. Each dictionary contains HBTC decoded
dictionary and clean image dictionary.
Figure 5. Schematic diagram of sparsity-based image reconstruction
Figure 6. Learned dictionaries generated from horizontal, vertical, and corner borders, as indicated from the first to the last row. The left and right columns denote the coupled dictionaries generated from the halftoning-BTC and clean image patches, respectively
Let $T = \{i_k(x, y), o_k(x, y)\}$ be a training set. This set contains $K$ clean image patches $i_k(x, y)$ and their corresponding HBTC decoded image patches $o_k(x, y)$. The following optimization procedure learns two dictionaries, i.e. the clean image dictionary ($D^c$) and the HBTC image dictionary ($D^h$), from the training set $T$. To obtain the sparse coefficients $\alpha$, please refer to [19] for a detailed explanation of the optimization process. This process is denoted as:
$\min_{D^c, D^h, \alpha} \left\{ \left\| i - D^c \alpha \right\|_2^2 + \left\| o - D^h \alpha \right\|_2^2 \right\} + \lambda \left\| \alpha \right\|_1$ (4)
where $\alpha$ denotes the sparse coefficients and $\lambda$ is the sparse regularization term. Since the two dictionaries $D^c$ and $D^h$ share the same sparse coefficient $\alpha$, the optimization in (4) can also be performed as:
$\min_{D, \alpha} \left\{ \left\| Y - D \alpha \right\|_2^2 \right\} + \lambda \left\| \alpha \right\|_1$ (5)
where $D = [D^c; D^h]$ and $Y = [i^T, o^T]^T$ denote the concatenated dictionary and the concatenated image patch, respectively. The dictionary $D$ contains the clean and HBTC decoded component dictionaries. The matrix $Y$ consists of the clean and HBTC decoded image patches. The KSVD [19] or other dictionary learning algorithms can be exploited to effectively solve the optimization problem in (5).
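A compact way to realize this joint training is to stack each clean/HBTC patch pair feature-wise and learn a single dictionary over the concatenation, then split it back into its two halves (a sketch of our own; scikit-learn's dictionary learner is used here as a stand-in for the KSVD solver of [19], which is an assumption, not the paper's implementation):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_coupled_dictionaries(clean, hbtc, n_atoms=1024, n_nonzero=8):
    """Learn D^c and D^h jointly per eqs. (4)-(5).

    `clean` and `hbtc` are (K, d) arrays of vectorized training patches.
    Each concatenated sample y_k = [i_k; o_k] is coded with one shared
    sparse vector, so splitting the learned dictionary recovers the
    coupled pair.
    """
    d = clean.shape[1]
    Y = np.hstack([clean, hbtc])                     # (K, 2d)
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm='omp',
        transform_n_nonzero_coefs=n_nonzero)
    learner.fit(Y)
    D = learner.components_.T                        # (2d, n_atoms)
    return D[:d], D[d:]                              # D^c, D^h
```

One such coupled pair is trained per region type (non-border, horizontal, vertical, corner).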
After deciding the border or non-border region, the next step is to determine the sparse coefficient of the HBTC image patch $o(x, y)$ with a sparse coding step. The sparse coefficient of $o(x, y)$ can be predicted with the help of $D_{\theta}^{h}$ as follows:
$\alpha^* = \arg\min_{\alpha} \left\{ \left\| o(x, y) - D_{\theta}^{h} \alpha \right\|_2^2 \right\} + \lambda \left\| \alpha \right\|_1$ (6)
where $D_{\theta}^{h}$ denotes the HBTC image dictionary under the border or non-border region $\theta$. The symbol $\alpha^*$ represents the predicted sparse coefficient. Several specific algorithms, such as matching pursuit (MP) [21], orthogonal matching pursuit (OMP) [22], basis pursuit (BP) [23], maximum a posteriori (MAP) [24], or others, can be exploited to predict $\alpha^*$. By means of the clean image dictionary $D_{\theta}^{c}$ with border or non-border region $\theta$, the HBTC decoded image can be replaced and aligned as follows:
$\min_{\tilde{R}(x, y)} \left\{ \left\| R(x, y) \tilde{R}(x, y) - D_{\theta}^{c} \alpha^* \right\|_2^2 \right\}$ (7)
where $\tilde{R}(x, y)$ denotes the HBTC reconstructed image. Similarly to the VQ-based post-processing approach, $\tilde{R}(x, y)$ is an overdetermined system. Thus, $\tilde{R}(x, y)$ can be solved using the ordinary least squares method (averaging process) as follows:
$\tilde{R}(x, y) = \left\{ \sum R(x, y)^{T} R(x, y) \right\}^{-1} \sum R(x, y)^{T} D_{\theta}^{c} \alpha^*$ (8)
The image patch is extracted using the $R(x, y)$ operator. It can be expected that the HBTC image reconstruction will yield better image quality since it utilizes the clean image dictionary. In addition, the centralized sparse representation [26] can also be embedded into the sparse representation module to further improve the quality of the HBTC decoded image. It is based on the observation that the difference between the sparse coefficients of the original image (without HBTC compression) and the HBTC decoded image, $\alpha_x - \alpha_y$, has a high probability of fitting the Laplace distribution. Readers are referred to [26] for the full description of the centralized sparse representation. Figure 7 depicts the distribution of sparse coding differences between the original image and the HBTC decoded image. Thus, incorporating the Laplacian prior knowledge into the sparse representation framework may produce better quality in the HBTC reconstructed image. The centralized sparse representation [26] can be deployed into the proposed method by considering a border or non-border region.
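Putting (6)-(8) together, one patch is reconstructed by sparse-coding it against the HBTC half of the coupled dictionary and synthesizing from the clean half; the per-pixel averaging of (8) then reuses the same overlap aggregation as in the VQ case. A sketch of our own follows (OMP [22] is one admissible solver for (6); the centralized/Laplacian-prior refinement of [26] is omitted here):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_patch(o, D_h, D_c, n_nonzero=8):
    """Eq. (6): estimate alpha* from the HBTC patch o using D^h (OMP);
    eqs. (7)-(8): rebuild the patch from the clean dictionary D^c.

    D_h and D_c are (d, n_atoms) arrays with the same column ordering;
    D_h columns should be normalized for OMP to behave well.
    """
    alpha = orthogonal_mp(D_h, o.ravel(), n_nonzero_coefs=n_nonzero)
    return (D_c @ alpha).reshape(o.shape)
```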
Figure 7. The distribution of sparse coding from HBTC reconstructed image obtained from:
(a) ODBTC decoded image, and (b) EDBTC decoded image
4. EXPERIMENTAL RESULTS
This section demonstrates the effectiveness and usefulness of the proposed method. Figure 8 shows
the training and testing sets utilized for the experiment. The training set consists of sixteen grayscale images,
whereas the testing set contains twenty grayscale images. The training and testing images cover various image conditions such as high-frequency and low-frequency content, dark and light brightness, etc. The proposed method
also employs a set of training images similar to that used in [13] to generate several visual codebooks and dictionaries. This training image set consists of twenty grayscale images over various conditions, such as different illumination and lighting, frequency and activity, etc. Each training image is of size
512 × 512. We extract image patches from all training images using an overlapping strategy. Herein, the overlapping strategy refers to moving the current processed image patch to a new position that differs by one pixel. Thus, one image pixel may be processed several times, since several image patches contain that pixel.
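A minimal sketch of this overlapping extraction (one-pixel stride) is shown below; the function name and the stride argument are illustrative.

```python
import numpy as np

def extract_patches(img, patch=8, stride=1):
    """Slide a patch x patch window over img; stride=1 gives the
    overlapping strategy in which adjacent patches differ by one pixel."""
    H, W = img.shape
    return np.asarray([img[x:x + patch, y:y + patch].ravel()
                       for x in range(0, H - patch + 1, stride)
                       for y in range(0, W - patch + 1, stride)])

# One 512 x 512 image yields (512 - 8 + 1)^2 = 255,025 overlapping 8 x 8
# patches, so twenty such training images give roughly 5.1 million patches,
# in line with the "more than five million" figure quoted below.
```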
As a result, we obtain more than five million image patches as the training set for codebook generation and dictionary learning in the VQ-based and sparsity-based approaches, respectively. This amount of data is sufficient to satisfy the variability aspect of image patches in the training process. Thus, the proposed method can be directly applied to a general case even though the training data are not from the same or a specific dataset. The peak signal-to-noise ratio (PSNR) score evaluates the performance of the proposed method and the former schemes objectively. The PSNR is formulated as:
PSNR = 10\log_{10}\frac{255^2}{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[I(x,y)-\tilde{R}(x,y)\right]^2} \qquad (9)
where $\tilde{R}(x,y)$ and $I(x,y)$ denote the HBTC reconstructed image and the original image, respectively. A higher PSNR value indicates higher similarity between the two images and thus a more preferable result for human vision. Throughout the experiments, the image block sizes are set to 8 × 8 and 16 × 16 for both the ODBTC and EDBTC methods. The image quality improvement after post-processing is measured as the increase in PSNR obtained by applying the post-processing step to the HBTC decoded image, compared with not applying it.
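For reference, (9) translates directly into code; a minimal sketch for 8-bit grayscale images:

```python
import numpy as np

def psnr(original, reconstructed):
    """PSNR of (9) for 8-bit images; higher means closer to the original."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)             # (1/MN) * sum of squared errors
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```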
Figure 8. The image sets used in this experiment: (a) training set, and (b) testing set
4.1. Effectiveness of the proposed method
This subsection reports the experimental results on the HBTC decoded image reconstruction. The performance of the proposed method is visually judged based on the image quality of the post-processing results. The proposed method yields better reconstructed image quality compared with the original HBTC decoded images. Herein, a single image is firstly encoded using either the ODBTC or the EDBTC method to yield two color quantizers and a bitmap image. These two methods simply set the image block size as 8 × 8. Then, the decoding process is applied to the two color quantizers and the bitmap image to obtain the ODBTC or EDBTC decoded image. The post-processing is conducted on the ODBTC or EDBTC decoded image to further examine the performance of the proposed reconstruction method for specific imaging tasks such as image compression, retrieval, and classification.
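The encode-decode pipeline can be pictured with a classic (absolute moment) BTC block codec. ODBTC and EDBTC differ mainly in how the bitmap is produced (a dither array or an error-diffusion kernel instead of a mean threshold) and in their choice of quantizers; the sketch below is only meant to illustrate the quantizers-plus-bitmap data flow, not the exact ODBTC/EDBTC encoder.

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC on one block: a bitmap from thresholding at the block
    mean, plus two quantizers (the means of the two pixel groups)."""
    bitmap = block >= block.mean()
    hi = block[bitmap].mean() if bitmap.any() else block.mean()
    lo = block[~bitmap].mean() if (~bitmap).any() else block.mean()
    return bitmap, hi, lo

def btc_decode_block(bitmap, hi, lo):
    """Decoder: each bitmap entry is replaced by its quantizer."""
    return np.where(bitmap, hi, lo)
```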
Firstly, we compare the VQ-based and sparsity-based approaches with other schemes such as lowpass filtering and variance-classified filtering. The lowpass filtering approach employs a Gaussian kernel of size 11 × 11 with $\mu = 0$ and $\sigma = 1$ to suppress the halftoning impulsive noise [13]. On the other hand, the variance-classified filtering approach uses 13 optimized kernels of size 7 × 7 [13]. The VQ-based technique exploits an optimized visual codebook with 1024 codewords, while the sparsity-based method utilizes 1024 dictionary atoms. The sparsity-based image reconstruction employs two learned dictionaries. In this approach, the first dictionary is for suppressing the impulsive noise, while the other dictionary reduces the noise occurring at the HBTC
image border. The second dictionary considers the conditions of the HBTC image border, such as mixed horizontal, vertical, and corner borders. For each image border type, a coupled dictionary is produced containing the HBTC decoded and clean image components. The sparse coefficient is predicted from the ODBTC or EDBTC decoded image by means of the HBTC dictionary, while the post-processed image is composed and aligned using the clean image dictionary with the predicted sparse coefficient. The image patch is initially classified as of the “border” or “non-border” type.
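For the lowpass filtering baseline described at the start of this comparison, an 11 × 11 Gaussian with $\sigma = 1$ corresponds, for instance, to scipy's gaussian_filter with truncate = 5.0 (radius 5, hence 11 taps); a one-function sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_baseline(decoded):
    """11x11 Gaussian kernel with sigma = 1, as in the baseline of [13]."""
    return gaussian_filter(decoded.astype(np.float64), sigma=1.0, truncate=5.0)
```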
Figure 9 depicts the image quality comparison after applying the post-processing methods on the ODBTC decoded image. As shown in this figure, the VQ-based technique produces better image quality in comparison with the lowpass filtering approach. Some of the impulsive noise is successfully reduced by the lowpass filtering, but the resolution of the reconstructed image deteriorates due to the blurring effect of the lowpass filter. The sparsity-based method offers the best ODBTC reconstructed image compared with the other schemes, as demonstrated in Figure 9. The post-processing results for the EDBTC decoded image are reported in Figure 10. Similar to the ODBTC case, the sparsity-based method for EDBTC decoded images outperforms the lowpass filtering and VQ-based post-processing. Thus, the sparsity-based method can be regarded as a satisfactory post-processing technique for improving the HBTC decoded image quality. Using the proposed method, the HBTC is expected to achieve low computational complexity at the HBTC encoding stage and, at the same time, produce a satisfactory decoded image at the HBTC decoding side.
Figure 9. Effectiveness of the proposed method in ODBTC image reconstruction using: (a) lowpass filtering technique, (b) VQ-based image reconstruction, (c) sparsity-based image reconstruction using two learned dictionaries, and (d) the original ODBTC decoded image with image block size 8 × 8
Figure 10. Effectiveness of the proposed method in EDBTC image reconstruction using: (a) lowpass filtering technique, (b) VQ-based image reconstruction, (c) sparsity-based image reconstruction using two learned dictionaries, and (d) the original EDBTC decoded image with image block size 8 × 8
4.2. Proposed method performance using two dictionaries
In this experiment, the performance of the proposed method and the other schemes is objectively assessed in terms of PSNR score. The image block sizes are set to 8 × 8 and 16 × 16 for both the ODBTC and EDBTC techniques. The sparsity-based method simply exploits two dictionaries, i.e., the impulsive noise dictionary and the image border dictionary. Each of them is a coupled dictionary consisting of an HBTC image dictionary and a clean image dictionary. The experimental settings for the lowpass and variance-classified filtering remain identical to those of section 4.1.
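The paper does not spell out how such a coupled pair is trained; one common way (an assumption here) is to learn a single dictionary on concatenated HBTC/clean patch vectors, so that both halves share one sparse code, and then split the atoms:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_coupled_dictionary(hbtc_patches, clean_patches, n_atoms=1024):
    """Joint training sketch: stack each HBTC patch with its clean
    counterpart, learn one dictionary over the concatenation, and split
    the atoms into the HBTC dictionary D_h and the clean dictionary D_c."""
    joint = np.hstack([hbtc_patches, clean_patches])   # (n_samples, 2 * d)
    model = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                        batch_size=256, random_state=0)
    model.fit(joint)
    d = hbtc_patches.shape[1]
    atoms = model.components_                          # (n_atoms, 2 * d)
    return atoms[:, :d].T, atoms[:, d:].T              # atoms as columns
```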
Table 1 presents the performance of the proposed sparsity-based method on HBTC image reconstruction against lowpass filtering, variance-classified filtering, VQ-based post-processing, and the absence of post-processing. In this table, the compared value (denoted as PSNR score) is the average PSNR over all testing images. The variance-classified filtering technique offers better image reconstruction than the lowpass filtering, whereas the VQ-based approach is superior to both filtering methods. As shown in this table, the sparsity-based method yields the best performance for the ODBTC and EDBTC decoded images over image block sizes 8 × 8 and 16 × 16. Thus, the sparsity-based method with two learned dictionaries is suitable for improving the image quality of the HBTC decoded image.
Table 1. Performance of the proposed method with two learned dictionaries in terms of PSNR score; one dictionary is from noisy image patches, and the other dictionary is from image patches extracted on the border

Method                     ODBTC 8×8   ODBTC 16×16   EDBTC 8×8   EDBTC 16×16
Without Post Processing    19.23       16.10         20.14       16.54
Lowpass Filtered [13]      29.14       28.46         30.76       30.11
Variance Classified [13]   29.48       28.63         31.63       30.82
VQ-Based                   33.96       33.89         34.68       34.37
Sparsity-Based             34.77       34.63         34.85       34.80
4.3. Proposed method performance using several dictionaries
This experiment examines the usefulness of the proposed sparsity-based post-processing method under four learned dictionaries. In this experiment, the sparsity-based method generates four learned dictionaries: one for suppressing the impulsive noise, and the others for dealing with the horizontal, vertical, and corner borders. Each dictionary is comprised of two dictionaries coupled to each other, namely the HBTC image dictionary and the clean image dictionary. The performance is investigated in terms of the average PSNR score over all testing images. Table 2 shows the performance comparison between the proposed method using four learned dictionaries and the former existing schemes. Herein, the image block size is set to 8 × 8 and 16 × 16 for both the ODBTC and EDBTC techniques. As can be seen from this table, the sparsity-based method produces the best average PSNR value compared with the other schemes. The sparsity-based method with four learned dictionaries offers a better reconstructed image than the sparsity-based scheme with two learned dictionaries. The sparsity-based technique is thus suitable for the HBTC decoded image reconstruction task.
Table 2. Performance of the proposed method with four learned dictionaries; one dictionary is from noisy image patches, and the other dictionaries are from image patches extracted on the horizontal, vertical, and corner borders

Method                     ODBTC 8×8   ODBTC 16×16   EDBTC 8×8   EDBTC 16×16
Without Post Processing    19.23       16.10         20.14       16.54
Lowpass Filtered [13]      29.14       28.46         30.76       30.11
Variance Classified [13]   29.48       28.63         31.63       30.82
VQ-Based                   34.01       33.92         34.74       34.45
Sparsity-Based             34.98       34.71         34.91       34.83
4.4. Proposed method performance incorporating Laplacian prior knowledge
In this experiment, the sparsity-based method injects the Laplacian prior knowledge in order to generate four learned dictionaries. Herein, the sparse coding noise is assumed to obey the Laplace distribution [26]. Thus, the centralized sparse representation is more suitable for learning the dictionaries from a given set of training images. The performance is measured in terms of the average PSNR value over all testing images. Table 3 reports the performance comparisons, where the proposed method considers the Laplacian prior both to generate the learned dictionaries and to perform the closest matching between the HBTC decoded image patch and the learned dictionaries. The proposed method with the Laplacian prior outperforms the former existing schemes, as reported in Table 3. The proposed sparsity-based method with the Laplacian prior also offers a better average PSNR value than the sparsity-based method without the Laplacian prior. Thus, the sparsity-based method with the Laplacian prior is the best candidate for the HBTC image reconstruction.
Table 3. Performance of the proposed method by incorporating the Laplacian prior in the sparse coding stage

Method                     ODBTC 8×8   ODBTC 16×16   EDBTC 8×8   EDBTC 16×16
Without Post Processing    19.23       16.10         20.14       16.54
Lowpass Filtered [13]      29.14       28.46         30.76       30.11
Variance Classified [13]   29.48       28.63         31.63       30.82
VQ-Based                   34.01       33.92         34.74       34.45
Sparsity-Based             35.00       34.82         35.03       34.97
4.5. Computational time comparison
This subsection delivers the performance comparisons in terms of computational time. In this experiment, a single image reconstruction is conducted to measure the computational burden in terms of CPU time.
The execution time is measured in seconds. To make a fair comparison, identical computing environments are used for all post-processing methods. Herein, the experiments are performed on a personal computer with an Intel Core 2 Quad CPU @ 2.40 GHz and 4.0 GB of RAM. All image reconstruction methods are developed and run in Matlab R2010b.
Table 4 summarizes the computation time comparisons. Our results suggest that the lowpass filtering approach needs the lowest computation time since it simply performs filtering in a blind manner: it ignores the image characteristics and noise statistics, therefore requiring the lowest effort in the filtering process. The variance-classified filtering requires a higher computational time than the lowpass filtering since it needs to investigate the underlying image statistics and characteristics; it computes the variance as a statistical metric to determine a suitable filter coefficient, thereby requiring relatively more time. The VQ-based approach requires a longer computational time than the lowpass and variance-classified filtering in the HBTC image reconstruction. The VQ-based method employs the closest matching and image patch substitution in the image reconstruction stage; in addition, it also performs the image patch averaging at the end of the reconstruction process. On the other hand, the sparsity-based method requires the highest computational time to reconstruct the HBTC decoded image. This method estimates the sparse coefficient and replaces the image patch based on the predicted sparse coefficient. The sparsity-based approach also incurs an additional computing cost for the least-squares calculation at the end of the image reconstruction procedure. Even though the sparsity-based approach is associated with the highest computational time, it produces the best reconstructed image for the HBTC decoded image. This effect can be considered a trade-off between the quality of the HBTC decoded image and the processing time.
Table 4. Computational time comparison (in seconds) between the proposed method and the former schemes in the halftoning-based BTC image reconstruction

Method                     ODBTC 8×8   ODBTC 16×16   EDBTC 8×8   EDBTC 16×16
Lowpass Filtered [13]      3.161       3.162         3.163       3.162
Variance Classified [13]   3.412       3.405         3.45        3.466
VQ-Based                   20.803      23.99         21.536      23.89
Sparsity-Based             35.12       35.88         35.44       35.12
5. CONCLUSION
A new method has been presented for improving the HBTC image quality under a sparse representation framework. The HBTC decoded image often contains impulsive noise, which induces an unpleasant condition for human visual perception. To alleviate the impulsive noise, the HBTC decoded image is predicted, aligned, and modified using two learned dictionaries, i.e., the HBTC and clean image dictionaries. In the proposed method, the sparse coefficient is estimated from the HBTC decoded image by means of the HBTC image dictionary. Subsequently, the reconstructed image is composed from the clean image dictionary with the predicted sparse coefficient. The HBTC image patch is initially classified as a border or non-border image patch before applying the sparse representation. The experimental findings suggest that the proposed method yields a promising performance and provides superior results compared with the former existing schemes. To further reduce the computational time, several techniques, such as fast codeword matching and simpler sparse coding calculation, can be exploited in the proposed method: the Euclidean distance computation can be replaced with a linear-time closest-matching technique, and the sparse coding and estimation steps can be replaced with recent advances in sparse representation. The proposed method can also be implemented in a parallel computation framework, in which different image blocks are processed independently.
REFERENCES
[1] Guo J. M. and M. F. Wu, "Improved block truncation coding based on the void-and-cluster dithering approach," IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 211-213, 2009.
[2] Guo J. M., "Improved block truncation coding using modified error diffusion," Electronics Letters, vol. 44, no. 7, pp. 462-464, 2008.
[3] Liu X., et al., "Joint data hiding and compression scheme based on modified BTC and image inpainting," IEEE Access, vol. 7, pp. 116027-116037, 2019.
[4] Chen Y. Y. and K. Y. Chi, "Cloud image watermarking: high quality data hiding and blind decoding scheme based on block truncation coding," Multimedia Systems, vol. 25, no. 5, pp. 551-563, 2019.
[5] Singh D. and S. K. Singh, "Block truncation coding based effective watermarking scheme for image authentication with recovery capability," Multimedia Tools and Applications, vol. 78, pp. 4197-4215, 2017.
[6] Huynh N. T., et al., "Minima-maxima preserving data hiding algorithm for absolute moment block truncation coding compressed images," Multimedia Tools and Applications, vol. 77, no. 5, pp. 5767-5783, 2018.
[7] Guo J. M. and H. Prasetyo, "Content-based image retrieval using features extracted from halftoning-based block truncation coding," IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 1010-1024, 2015.
[8] Guo J. M., H. Prasetyo, and H. S. Su, "Image indexing using the color and bit pattern feature fusion," Journal of Visual Communication and Image Representation, vol. 24, no. 8, pp. 1360-1379, 2013.
[9] Guo J. M., et al., "Image retrieval using indexed histogram of void-and-cluster block truncation coding," Signal Processing, vol. 123, pp. 143-156, 2016.
[10] Guo J. M., H. Prasetyo, and J. H. Chen, "Content-based image retrieval using error diffusion block truncation coding features," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 466-481, 2015.
[11] Guo J. M., H. Prasetyo, and K. Wong, "Halftoning-based block truncation coding image restoration," Journal of Visual Communication and Image Representation, vol. 35, pp. 193-197, 2016.
[12] Prasetyo H. and H. Kurniawan, "Reducing JPEG false contour using visual illumination," Information, vol. 9, no. 2, p. 41, 2018.
[13] Guo J. M., C. C. Su, and H. J. Kim, "Blocking effect and halftoning impulse suppression in an ED/OD BTC image with optimized texture-dependent filter sets," Proceedings of the 2011 International Conference on System Science and Engineering, 2011.
[14] Mustaffa M. R., et al., "A colour-based building recognition using support vector machine," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 1, pp. 473-480, 2019.
[15] Al-Nima R. R. O., M. Y. Al-Ridha, and F. H. Abdulraheem, "Regenerating face images from multi-spectral palm images using multiple fusion methods," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 6, pp. 3110-3119, 2019.
[16] Memon M. H., et al., "Blood image analysis to detect malaria using filtering image edges and classification," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 1, pp. 194-201, 2019.
[17] Attamimi M., T. Nagai, and D. Purwanto, "Object detection based on particle filter and integration of multiple features," Procedia Computer Science, vol. 144, pp. 214-218, 2018.
[18] Zhang Z., et al., "A survey of sparse representation: algorithms and applications," IEEE Access, vol. 3, pp. 490-530, 2015.
[19] Aharon M., M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.
[20] Etienam C., R. V. Velasquez, and O. Dorn, "Sparse multiple data assimilation with K-SVD for the history matching of reservoirs," Progress in Industrial Mathematics at ECMI, pp. 567-573, 2019.
[21] Mallat S. G. and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.
[22] Pati Y. C., R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition," Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, IEEE, 1993.
[23] Chen S. S., D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.
[24] Lewicki M. S. and T. J. Sejnowski, "Learning overcomplete representations," Neural Computation, vol. 12, no. 2, pp. 337-365, 2000.
[25] Golla M. and S. Rudra, "A novel approach of K-SVD-based algorithm for image denoising," Histopathological Image Analysis in Medical Decision Making, pp. 154-180, 2019.
[26] Dong W., et al., "Nonlocally centralized sparse representation for image restoration," IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620-1630, 2013.
[27] Hu Y. C., K. K. R. Choo, and W. L. Chen, "Tamper detection and image recovery for BTC-compressed images," Multimedia Tools and Applications, vol. 76, no. 14, pp. 15435-15463, 2017.