Performing image compression on resource-constrained hardware is a challenging task. Although various approaches to image compression have taken the hardware aspect into account, problems remain with the memory overhead of the operation, which degrades the performance of the hardware device. The proposed approach therefore presents a cost-effective image compression mechanism that offers lossless compression using a unique combination of non-linear filtering, segmentation, and contour detection, followed by optimization. The compression mechanism adopts an analytical approach for significant image compression. Its execution yields faster response time, reduced mean square error, improved signal quality, and significant compression-ratio performance.
Dataset Pre-Processing and Artificial Augmentation, Network Architecture and ... (INFOGAIN PUBLICATION)
Training a Convolutional Neural Network (CNN) based classifier depends on a large number of factors. These involve tasks such as aggregating an apt dataset, arriving at a suitable CNN architecture, processing the dataset, and selecting the training parameters needed to reach the desired classification results. This review covers the pre-processing and dataset augmentation techniques used in various CNN-based classification studies. In many classification problems, the quality of the dataset determines whether the CNN trains properly, and this quality is judged by the variation in the data for every class. Such a ready-made dataset is rarely available, for many practical reasons, and although a large dataset is recommended, one is not usually available directly either. In some cases, noise present in the dataset may hinder training, while in others researchers deliberately add noise to certain images to make the network less vulnerable to unwanted variations. Hence, researchers use artificial digital-imaging techniques to derive variations in the dataset and to remove or add noise. This paper therefore collects state-of-the-art works that applied pre-processing and artificial augmentation to datasets before training. The step after data augmentation is training, which includes proper selection of several parameters and a suitable CNN architecture. The paper also covers the network characteristics, dataset characteristics, and training methodologies used in biomedical imaging, vision modules of autonomous driverless cars, and a few general vision-based applications.
International Journal on Soft Computing (IJSC)
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
Image processing is among the most rapidly growing technologies today, with applications in various aspects of business. It also forms a core research area within the electronics engineering and computer science disciplines. Image processing is a technique to enhance raw images received from satellites, space probes, aircraft, and military reconnaissance flights, or pictures taken in day-to-day life with ordinary cameras. The field is becoming powerful and popular because of technically powerful personal computers, large memory on available devices, and the graphics software and tools available on those devices and gadgets. Image acquisition, pre-processing, segmentation, representation, recognition, and interpretation are the basic steps through which image processing is carried out [3][4].
Lossless Image Compression Using Data Folding Followed By Arithmetic Coding (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Enhanced Image Compression Using Wavelets (IJRES Journal)
Data compression, which can be lossy or lossless, is required to decrease storage requirements and improve data transfer rates. One of the best image compression approaches uses the wavelet transform; it is comparatively new and has many advantages over others. The wavelet transform offers a large variety of wavelets for decomposing images, and state-of-the-art coding techniques such as Haar and SPIHT (set partitioning in hierarchical trees) use it as a basic and common step on which their own refinements build. The results of the wavelet transform therefore depend on the type of wavelet used. In our thesis we used different wavelets to transform a test image, and the results are discussed and analyzed. Haar and SPIHT coding have been applied to an image and the results compared through qualitative and quantitative analysis in terms of PSNR (peak signal-to-noise ratio) values and compression ratios. Elapsed compression times for the different wavelets were also computed to identify the fastest method, along with the time taken for decomposition and reconstruction.
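As a hedged illustration of the wavelet machinery these comparisons rest on, the sketch below applies one level of the 2-D Haar transform in numpy and checks how much signal energy concentrates in the low-frequency band, alongside the PSNR metric quoted above; the ramp test image and single decomposition level are illustrative assumptions, not the thesis setup.

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar transform."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)   # row-wise averages
    hi = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)   # row-wise details
    a = np.hstack([lo, hi])
    lo = (a[::2, :] + a[1::2, :]) / np.sqrt(2)   # column-wise averages
    hi = (a[::2, :] - a[1::2, :]) / np.sqrt(2)   # column-wise details
    return np.vstack([lo, hi])

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# A smooth ramp stands in for a natural image.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float)
c = haar2d(img)
ll = c[:32, :32]
print(np.sum(ll ** 2) / np.sum(c ** 2))  # nearly all energy lands in the LL band
```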
A Novel Mechanism for Low Bit-Rate Compression (IOSR Journals)
This document describes a novel mechanism for low bit-rate image compression. It proposes an adaptive downsampling and upconversion technique that uses directional low-pass pre-filtering for downsampling images. The decoder then decompresses and upconverts the low-resolution images back to their original resolution. Experiments show this technique outperforms JPEG 2000, providing higher visual quality at lower bit rates. The proposed system exhibits superior compression performance compared to JPEG, especially at low bit rates, as shown by its higher PSNR values in tests on various images.
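A minimal sketch of the downsample-then-upconvert round trip follows, assuming an isotropic Gaussian pre-filter where the paper uses directional low-pass filtering, and a cubic zoom in place of its upconversion stage; the test image is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

img = np.add.outer(np.arange(64), np.arange(64)).astype(float)  # smooth test image
lowres = gaussian_filter(img, sigma=1.0)[::2, ::2]  # pre-filter, then downsample 2x
upconv = zoom(lowres, 2, order=3)                   # decoder-side cubic upconversion
mse = np.mean((img - upconv) ** 2)
print(10 * np.log10(255 ** 2 / mse))                # PSNR of the round trip
```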
IRJET- Handwritten Decimal Image Compression using Deep Stacked Autoencoder (IRJET Journal)
This document proposes using a deep stacked autoencoder neural network for compressing handwritten decimal image data. It involves training multiple autoencoders in sequence to form a deep network that can compress the high-dimensional input images into lower-dimensional encoded representations while minimizing information loss. The autoencoders are trained one layer at a time using scaled conjugate gradient descent. Testing on the MNIST handwritten digits dataset showed the deep stacked autoencoder achieved compression by encoding the 400-dimensional input images down to a 25-dimensional representation while maintaining good reconstruction accuracy, as measured by minimizing the mean squared error at each layer.
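A minimal numpy sketch of the greedy layer-wise idea follows. It trains linear autoencoders with plain gradient descent rather than the scaled conjugate gradient used in the paper, and the 400-100-25 layer sizes, learning rate, and random stand-in data are illustrative assumptions.

```python
import numpy as np

def train_autoencoder(X, hidden, lr=1e-3, epochs=200, seed=0):
    """Train one linear autoencoder on X (features x samples) by gradient descent."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    We = rng.normal(0.0, 0.01, (hidden, d))   # encoder weights
    Wd = rng.normal(0.0, 0.01, (d, hidden))   # decoder weights
    for _ in range(epochs):
        H = We @ X                            # encode
        R = X - Wd @ H                        # reconstruction error
        Wd += lr * (R @ H.T) / n              # MSE gradient steps (the factor 2
        We += lr * (Wd.T @ R @ X.T) / n       # is folded into the learning rate)
    return We, Wd

# Greedy layer-wise stacking: each autoencoder trains on the previous layer's codes.
X = np.random.rand(400, 1000)                 # stand-in for 20x20-pixel digit images
We1, _ = train_autoencoder(X, 100)
H1 = We1 @ X
We2, _ = train_autoencoder(H1, 25)
code = We2 @ H1                               # 25-dimensional representation
```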
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ... (acijjournal)
Recent advances in computing, such as massively parallel GPUs (Graphical Processing Units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community, and other stakeholders. These challenges, such as the prohibitively large cost of manipulating digital data, have been the focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the Discrete Cosine Transform (DCT), which separates an image into parts of differing frequencies and has the advantage of excellent energy compaction. This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the DCT-based, CORDIC-based Loeffler algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and the GPU, and PSNR (Peak Signal-to-Noise Ratio) is used to evaluate image reconstruction quality. The results are presented and discussed.
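As a CPU-only sketch of the DCT stage (not the CUDA CORDIC/Loeffler implementation the paper evaluates), the following applies an orthonormal 8x8 block DCT with SciPy and zeroes the high frequencies; the random image and the crude low-pass mask are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """2-D DCT-II with orthonormal scaling, as used in JPEG-style coding."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

img = np.random.rand(64, 64) * 255
out = np.zeros_like(img)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        c = dct2(img[i:i+8, j:j+8])
        c[4:, :] = 0
        c[:, 4:] = 0                       # crude: keep only low frequencies
        out[i:i+8, j:j+8] = idct2(c)       # reconstruct from retained coefficients
```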
Efficient Image Compression Technique using Clustering and Random Permutation (IJERA Editor)
Multimedia data compression is challenging because of the possibility of data loss and the large amount of storage it requires; minimizing storage space and transmitting the data properly both call for compression. In this dissertation we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is compressed into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which selects the optimal block of the DWT transform function. For the experiments we used standard images such as Lena, Barbara, and Cameraman, at a resolution of 256x256; the images were sourced from Google.
Color image compression based on spatial and magnitude signal decomposition (IJECEIAES)
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most significant and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless image compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning, followed by an adaptive shift encoder. A lossy compression system handles the least significant value (LSV); it is based on an adaptive, error-bounded coding system and uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with the universal JPEG standard, and the results indicate performance comparable to or better than the JPEG standards.
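A tiny sketch of the magnitude decomposition described above, assuming an even 4/4 split of the bit planes between most and least significant values (the abstract does not state the exact split point):

```python
import numpy as np

pix = np.array([[201, 37], [180, 99]], dtype=np.uint8)
msv = pix >> 4             # most significant value: the top 4 bit planes
lsv = pix & 0x0F           # least significant value: the bottom 4 bit planes
assert np.array_equal((msv << 4) | lsv, pix)   # the split is exactly invertible
```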
Compressed Medical Image Transfer in Frequency Domain (CSCJournals)
A common approach to medical image compression begins by separating the region of interest (ROI) from the background of the image; lossless and lossy compression schemes are then applied to the ROI part and the background, respectively. The compressed files (ROI and background) are transmitted through different communication media (local host, Intranet, and Internet) between the server and clients. In this work, a medical image transfer coding scheme based on a lossless Haar wavelet transform is proposed. The proposed scheme is first tested on the Intranet (for both ROI and background) in order to compare its results with Internet tests. An adaptive quantization algorithm is applied to the quasi-lossless ROI wavelet coefficients, while uniform quantization is applied to the lossy background wavelet coefficients. Finally, the retained quantization indices are entropy-encoded with an optimal variable-length coding algorithm. The test results indicate that the proposed MITC performs much better via Intranet than via Internet in terms of transfer time, while the quality of the reconstructed medical image remains constant regardless of the communication medium. For the best adopted parameters, a compressed medical image file (760 KB reduced to 19.38 KB) is transmitted over the Internet (bandwidth = 1024 kbps) with a transfer time of 0.156 s, while the uncompressed file is sent with a transfer time of 6.192 s.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... (IAEME Publication)
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
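A hedged numpy sketch of the sum-of-absolute-differences matching step follows; the exhaustive ±4-pixel search window and the function interface are illustrative assumptions rather than the paper's exact search strategy.

```python
import numpy as np

def best_match(block, ref, top, left, search=4):
    """Exhaustive search for the reference block with the minimum SAD."""
    h, w = block.shape
    best, best_sad = (top, left), float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                sad = np.abs(block.astype(int) - ref[y:y+h, x:x+w].astype(int)).sum()
                if sad < best_sad:
                    best_sad, best = sad, (y, x)
    return best, best_sad

prev = (np.random.rand(32, 32) * 255).astype(np.uint8)
cur_block = prev[10:18, 12:20]                     # a block displaced from (8, 9)
print(best_match(cur_block, prev, top=8, left=9))  # finds (10, 12) with SAD 0
```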
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run-length encoding and Huffman encoding, and lossy techniques like transform coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and its applications. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (LVQ) for image compression.
This document presents a digital image watermarking technique using Hadamard transform. The key steps are:
1. The image is divided into 8x8 blocks and Hadamard transform applied to get coefficients. Breadth-first search is used to select efficient embedding points among the positive coefficients.
2. Prime numbered blocks are avoided for embedding. Two points in the first column are selected for embedding the watermark.
3. Cox's equation is used to embed the watermark in the host image coefficients. Inverse Hadamard transform generates the watermarked image.
4. To extract the watermark, the process is reversed using the visited matrix of points. Experiments show the technique provides better image
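As a hedged sketch of the Hadamard-domain embedding step, the following uses SciPy's Hadamard matrix and a Cox-style multiplicative rule on a single 8x8 block; the embedding position and the strength alpha are illustrative choices, not the breadth-first-search-selected points of the paper.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8) / np.sqrt(8)         # orthonormal, symmetric: H @ H = identity
block = np.random.rand(8, 8) * 255
coeffs = H @ block @ H               # forward 2-D Hadamard transform
alpha, wbit = 0.1, 1.0               # embedding strength and one watermark bit
coeffs[1, 0] *= (1 + alpha * wbit)   # Cox-style multiplicative embedding
marked = H @ coeffs @ H              # inverse transform returns to the pixel domain
```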
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The Computation Complexity Reduction of 2-D Gaussian Filter (IRJET Journal)
This document discusses reducing the computational complexity of a 2D Gaussian filter for image smoothing. It begins with an abstract that notes 2D Gaussian filters are commonly used for image smoothing but require heavy computational resources. It then proposes using fixed-point arithmetic rather than floating point to implement the filter on an FPGA, which can increase efficiency and decrease area and complexity. The document is divided into sections that cover the theory behind image filtering, image smoothing and sharpening, quality metrics for evaluation, and an energy scalable Gaussian smoothing filter architecture. It concludes by discussing results and benefits of implementing the filter using fixed-point arithmetic on an FPGA.
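A brief numpy sketch of the fixed-point idea follows: the Gaussian kernel is quantized to integer coefficients with 8 fractional bits so that the convolution datapath is integer-only, as it would be on an FPGA; the kernel size, sigma, and bit width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

FRAC_BITS = 8
k_float = gaussian_kernel()
k_fixed = np.round(k_float * (1 << FRAC_BITS)).astype(np.int32)  # Q8 coefficients

img = (np.random.rand(32, 32) * 255).astype(np.int32)
smooth_fixed = convolve(img, k_fixed, mode='nearest') >> FRAC_BITS  # integer-only
smooth_float = convolve(img.astype(float), k_float, mode='nearest')
print(np.max(np.abs(smooth_fixed - smooth_float)))  # small quantization error
```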
Discrete Cosine Transform for Image Com... (Alexander Decker, www.iiste.org)
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
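A hedged SciPy sketch of the 3D-DCT step on one 8x8x8 cube follows; a low-frequency mask stands in for the quantization and entropy-encoding stages, which are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

cube = np.random.rand(8, 8, 8)                # 8 frames of one 8x8 image region
coeffs = dctn(cube, norm='ortho')             # 3-D DCT across x, y, and time
mask = np.zeros_like(coeffs)
mask[:4, :4, :4] = 1                          # keep low spatio-temporal frequencies
approx = idctn(coeffs * mask, norm='ortho')   # decode from the retained coefficients
```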
This paper presents a new technique that provides a very good compression ratio while preserving the quality of the important components of the image, called main objects. It focuses on applications where the image is large and consists of an object or a set of objects on a background, such as identity photos. In these applications the background is generally uniform and carries insignificant information for the application. The results show that the new technique achieves an average compression ratio of 29% without any degradation in the quality of the objects detected in the images, better than the results obtained by lossless techniques such as JPEG and TIFF.
This document compares the performance of three lossless image compression techniques: Run Length Encoding (RLE), Delta encoding, and Huffman encoding. It tests these algorithms on binary, grayscale, and RGB images to evaluate compression ratio, storage savings percentage, and compression time. The results found that Delta encoding achieved the highest compression ratio and storage savings, while Huffman encoding had the fastest compression time. In general, the document evaluates and compares the performance of different lossless image compression algorithms.
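As a small illustration of two of the compared schemes, the sketch below run-length encodes and delta encodes the same row of pixels; the sample row is made up, and the Huffman stage is omitted.

```python
def rle_encode(data):
    """Run-length encode a flat sequence into (value, count) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def delta_encode(data):
    """Store the first value, then successive differences."""
    return [data[0]] + [b - a for a, b in zip(data, data[1:])]

row = [7, 7, 7, 7, 9, 9, 10, 11, 12]
print(rle_encode(row))    # [[7, 4], [9, 2], [10, 1], [11, 1], [12, 1]]
print(delta_encode(row))  # [7, 0, 0, 0, 2, 0, 1, 1, 1]
```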
A hybrid predictive technique for lossless image compression (journalBEEI)
Compression of images is of great interest in applications where efficiency with respect to data storage or transmission bandwidth is sought. The rapid growth of social media and digital networks has given rise to huge amounts of image data being accessed and exchanged daily. However, the larger the image, the longer it takes to transmit and archive; high-quality images require a huge amount of transmission bandwidth and storage space. Suitable image compression can help reduce image size and improve transmission speed. Lossless image compression is especially crucial in fields such as remote sensing, healthcare networks, and security and military applications, where image quality must be maintained to avoid errors during analysis or diagnosis. In this paper, a hybrid predictive lossless image compression algorithm is proposed to address these issues. The algorithm combines predictive Differential Pulse Code Modulation (DPCM) and the Integer Wavelet Transform (IWT). Entropy and compression-ratio calculations are used to analyze the performance of the designed coding. The analysis shows that the best hybrid predictive algorithm is the sequence DPCM-IWT-Huffman, which reduces the bit sizes of the tested Lena, Cameraman, Pepper, and Baboon images by 36%, 48%, 34%, and 13%, respectively.
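A hedged sketch of the DPCM stage and the entropy measure follows, assuming a left-neighbor predictor and a synthetic gradient image; the IWT and Huffman stages are omitted.

```python
import numpy as np

def dpcm_residuals(img):
    """Left-neighbor DPCM: each pixel is predicted by the pixel to its left."""
    a = img.astype(np.int32)
    res = a.copy()
    res[:, 1:] = a[:, 1:] - a[:, :-1]
    return res

def entropy(x):
    """Shannon entropy in bits per symbol."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))  # synthetic horizontal gradient
print(entropy(img))                  # 6 bits/symbol: 64 equally likely values
print(entropy(dpcm_residuals(img)))  # near 0: residuals are almost all 1
```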
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC (IJECEIAES)
The document discusses image enhancement techniques for use in multiprocessor systems-on-chip (MPSoC). It reviews existing image enhancement methods implemented on FPGA and identifies limitations like low accuracy. The paper proposes using advanced bus architectures like AMBA/AXI in MPSoC to improve communication between memory and input medical images for enhancement, reducing heterogeneity effects. It summarizes various image enhancement algorithms that could be implemented on reconfigurable FPGA hardware for use in MPSoC, including brightness control, contrast stretching, histogram equalization, and edge detection techniques.
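As one concrete example from that list of algorithms, the sketch below performs histogram equalization in numpy; it is a software illustration, not the FPGA/MPSoC implementation the paper targets.

```python
import numpy as np

def equalize(img):
    """Histogram equalization via the cumulative distribution of gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())  # normalize to 0..255
    return cdf[img].astype(np.uint8)                         # remap each pixel

img = (np.random.rand(64, 64) * 100 + 50).astype(np.uint8)   # low-contrast input
print(img.min(), img.max(), equalize(img).min(), equalize(img).max())
```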
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMT (IRJET Journal)
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
An Efficient Design Approach of ROI Based DWT Using Vedic and Wallace Tree Mu... (IJECEIAES)
In digital image processing, compression is used to improve visual perception and storage cost. Reconstructing medical images, especially the region of interest (ROI), from lossy compression using hardware architectures is a challenging task. In this paper, ROI-based Discrete Wavelet Transform (DWT) designs are presented using a Wallace-tree multiplier (WM) and a modified Vedic multiplier (VM). The lifting-based DWT method is used for ROI compression and reconstruction, and the 9/7 filter coefficients in the DWT are multiplied using the WM and VM designs. The Wallace-tree multiplier operates in parallel using a pipelined architecture, which optimizes hardware resources, while the 8x8 Vedic multiplier design improves ROI reconstruction quality and computation speed. To evaluate the ROI-based DWT-WM and DWT-VM on an FPGA platform, PSNR and MSE are calculated for different brain MRI images, and hardware constraints including area, delay, maximum operating frequency, and power are tabulated. The proposed model is designed on the Xilinx platform using Verilog-HDL, simulated using ModelSim, and implemented on an Artix-7 FPGA device.
This survey proposes a novel Joint Data-Hiding and Compression (JDHC) scheme for digital images using side-match vector quantization (SMVQ) and image inpainting. In the JDHC scheme, image compression and data hiding are combined into a single module. On the client side, the data are hidden and compressed in a sub-codebook covering the remaining blocks, i.e., all blocks except the leftmost and topmost ones, following raster scanning order (block by block, row by row). Vector quantization is used with SMVQ and image inpainting for complex blocks to control distortion and error injection. The receiver-side process is based on two methods: the first divides the received image into a series of blocks, from which the receiver recovers the hidden data and the original image according to the index value in each segmented block; the second uses edge-based harmonic inpainting to recover the original image if any part of it is lost.
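As a hedged illustration of the vector-quantization core (plain VQ rather than the side-match variant, which additionally conditions the codebook on the borders of already-decoded neighboring blocks), the SciPy sketch below builds a codebook over 4x4 blocks; the block and codebook sizes are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

img = (np.random.rand(64, 64) * 255).astype(float)
# Split into 4x4 blocks, flattening each into a 16-dimensional vector.
blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
codebook, indices = kmeans2(blocks, 32, minit='points')  # 32-entry VQ codebook
decoded = codebook[indices]        # decoder: each 5-bit index -> a 16-dim block
# SMVQ would instead build a small state codebook per block from the borders
# of the already-decoded upper and left neighbors.
```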
Artificial Neural Networks (ANNs) are a fast-growing method that has been used across industries in recent years. The main idea behind ANNs, a subset of artificial intelligence, is to provide a simple model of the human brain in order to solve complex scientific and industrial problems. ANNs are high-value, low-cost tools for modelling, simulation, control, condition monitoring, sensor validation, and fault diagnosis of different systems. They have high flexibility and robustness in modeling, simulating, and diagnosing the behavior of rotating machines, even in the presence of inaccurate input data, and they provide high computational speed for complicated tasks that require rapid response, such as real-time processing of several simultaneous signals. ANNs can also be used to improve the efficiency and productivity of energy in rotating equipment.
In recent years, with the development of graphics processors, graphics cards have been widely used to perform general-purpose calculations. Especially since the release of the CUDA C programming language in 2007, most researchers needing high-performance computing have used CUDA C. In this paper, a scaling approach for image segmentation using level sets is implemented with GPU programming techniques. Level-set approaches are usually based on the solution of partial differential equations, but the proposed method does not require solving them; instead, a scaling approach using basic geometric transformations is employed, which reduces the required computational cost. Using CUDA programming on the GPU gives an advantage over classic programming in both time and performance, so results are obtained faster, and the GPU enables real-time processing. The application developed in this study is used to find tumors in MRI brain images.
A REVIEW OF IMAGE COMPRESSION TECHNIQUES (Arlene Smith)
This document reviews various image compression techniques used for medical images. It begins by discussing the need for compressing large volumes of medical images generated for storage and transmission purposes. It then summarizes several key lossless and lossy compression techniques that have been proposed in other research papers, including techniques using wavelet transforms, DCT, and Huffman encoding. The techniques are evaluated based on their advantages like preserving image quality, and limitations like being slow or expensive. Results showed compression ratios from 2.5% to over 40% were achieved without significantly degrading image quality. Overall the document provides an overview of different medical image compression methods and their performance.
The main aim of image compression is to represent an image with the minimum number of bits and thus reduce its size. This paper presents a Symbols Frequency based Image Coding (SFIC) technique for image compression that utilizes the frequency of occurrence of pixel values in an image. A frequency factor y is used to merge pixel values that lie in the same range: pixel values within a range of width y are clubbed to the least pixel value in that set. As a result, larger pixel values are omitted, the total size of the image reduces, and a higher compression ratio results. The selection of the frequency factor y has a great influence on the performance of the proposed scheme; however, high PSNR values are obtained since the omitted pixels are mapped to pixels in a similar range. The proposed approach is analyzed with and without quantization, and the new compression model is compared with quadtree-segmented AMBTC with bit-map omission (AMBTC-QTBO). The experimental analysis shows that the proposed SFIC scheme, in both its lossless and lossy forms, outperforms AMBTC-QTBO, making it a better choice for lossless and lossy compression applications.
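One plausible reading of the merging rule is floor quantization by the frequency factor y, sketched below; the choice y = 8 and the random test image are illustrative assumptions.

```python
import numpy as np

def sfic_merge(img, y):
    """Map every pixel to the least value of its width-y range (floor quantization)."""
    return (img // y) * y

img = (np.random.rand(64, 64) * 255).astype(np.int32)
merged = sfic_merge(img, 8)                         # y = 8 is an illustrative choice
print(len(np.unique(img)), len(np.unique(merged)))  # far fewer symbols to encode
```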
Survey of Hybrid Image Compression Techniques (IJECEIAES)
A compression process reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques over the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method: it combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression brings about high-quality data reconstruction, as the data can later be decompressed to the same state as before compression. Discussion of the knowledge of and issues in ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.
Efficient Image Compression Technique using Clustering and Random PermutationIJERA Editor
Multimedia data compression is a challenging situation for compression technique, due to the possibility of loss
of data as well as it require large amount of storage place. The minimization of storage place and proper
transmission of these data need compression. In this dissertation we proposed a block based DWT image
compression technique using genetic algorithm and HCC code matrix. The HCC code matrix compressed into
two different set redundant and non-redundant which generate similar pattern of block coefficient. The similar
block coefficient generated by particle of swarm optimization. The process of particle of swarm optimization is
select for the optimal block of DWT transform function. For the experimental purpose we used some standard
image such as Lena, Barbara and cameraman image. The size of resolution of this image is 256*256. The source
of image is Google.
Color image compression based on spatial and magnitude signal decomposition IJECEIAES
In this paper, a simple color image compression system has been proposed using image signal decomposition. Where, the RGB image color band is converted to the less correlated YUV color model and the pixel value (magnitude) in each band is decomposed into 2-values; most and least significant. According to the importance of the most significant value (MSV) that influenced by any simply modification happened, an adaptive lossless image compression system is proposed using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV), it is based on an adaptive, error bounded coding system, and it uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with those attained from the universal standard JPEG, and the results of applying the proposed system indicated its performance is comparable or better than that of the JPEG standards.
Compressed Medical Image Transfer in Frequency DomainCSCJournals
A common approach to the medical image compression algorithm begins by separating the region of interests from the background of the medical images and then lossless and lossy compression schemes are applied on the ROI part and background respectively. The compressed files (ROI and background) are now transmitted through different media of communications (local host, Intranet and Internet) between the server and clients. In this work, a medical image transfer coding scheme based on lossless Haar wavelet transforms method is proposed. At first, the proposed scheme is tested on Intranet (for both RoI and background) in order to compare its results with Internet tests. An adaptive quantization algorithm is used to apply on quasi lossless ROI wavelet coefficients while a uniform quantization is used to apply on lossy background wavelet coefficients. Finally, the retained quantization indices are entropy encoded with an optimal variable coding algorithm. The test results have indicated that the performance of the proposed MITC via Intranet is much better than via Internet in terms of transferring time, while the quality of the reconstructed medical image remains constant despite the medium of communication. For best adopted parameters, a compressed medical image file (760 KB „³ 19.38 KB) is transmitted through Internet (bandwidth= 1024 kbps) with transfer time = 0.156 s while the uncompressed file is sent with transfer time = 6.192 s.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
This document presents a digital image watermarking technique using Hadamard transform. The key steps are:
1. The image is divided into 8x8 blocks and Hadamard transform applied to get coefficients. Breadth-first search is used to select efficient embedding points among the positive coefficients.
2. Prime numbered blocks are avoided for embedding. Two points in the first column are selected for embedding the watermark.
3. Cox's equation is used to embed the watermark in the host image coefficients. Inverse Hadamard transform generates the watermarked image.
4. To extract the watermark, the process is reversed using the visited matrix of points. Experiments show the technique provides better image
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The Computation Complexity Reduction of 2-D Gaussian FilterIRJET Journal
This document discusses reducing the computational complexity of a 2D Gaussian filter for image smoothing. It begins with an abstract that notes 2D Gaussian filters are commonly used for image smoothing but require heavy computational resources. It then proposes using fixed-point arithmetic rather than floating point to implement the filter on an FPGA, which can increase efficiency and decrease area and complexity. The document is divided into sections that cover the theory behind image filtering, image smoothing and sharpening, quality metrics for evaluation, and an energy scalable Gaussian smoothing filter architecture. It concludes by discussing results and benefits of implementing the filter using fixed-point arithmetic on an FPGA.
11.0003www.iiste.org call for paper_d_discrete cosine transform for image com...Alexander Decker
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
This paper presents a new technique able to provide a very good compression ratio in preserving the quality of the important components of the image called main objects. It focuses on applications where the image is of large size and consists of an object or a set of objects on background such as identity photos. In these applications, the background of the objects is in general uniform and represents insignificant information for the application. The results of this new techniques show that is able to achieve an average compression ratio of 29% without any degradation of the quality of objects detected in the images. These results are better than the results obtained by the lossless techniques such as JPEG and TIF techniques.
This document compares the performance of three lossless image compression techniques: Run Length Encoding (RLE), Delta encoding, and Huffman encoding. It tests these algorithms on binary, grayscale, and RGB images to evaluate compression ratio, storage savings percentage, and compression time. The results found that Delta encoding achieved the highest compression ratio and storage savings, while Huffman encoding had the fastest compression time. In general, the document evaluates and compares the performance of different lossless image compression algorithms.
A hybrid predictive technique for lossless image compressionjournalBEEI
Compression of images is of great interest in applications where efficiency with respect to data storage or transmission bandwidth is sought.The rapid growth of social media and digital networks have given rise to huge amount of image data being accessed and exchanged daily. However, the larger the image size, the longer it takes to transmit and archive. In other words, high quality images require huge amount of transmission bandwidth and storage space. Suitable image compression can help in reducing the image size and improving transmission speed. Lossless image compression is especially crucial in fields such as remote sensing healthcare network, security and military applications as the quality of images needs to be maintained to avoid any errors during analysis or diagnosis. In this paper, a hybrid prediction lossless image compression algorithm is proposed to address these issues. The algorithm is achieved by combining predictive Differential Pulse Code Modulation (DPCM) and Integer Wavelet Transform (IWT). Entropy and compression ratio calculation are used to analyze the performance of the designed coding. The analysis shows that the best hybrid predictive algorithm is the sequence of DPCM-IWT-Huffman which has bits sizes reduced by 36%, 48%, 34% and 13% for tested images of Lena, Cameraman, Pepper and Baboon, respectively.
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC IJECEIAES
The document discusses image enhancement techniques for use in multiprocessor systems-on-chip (MPSoC). It reviews existing image enhancement methods implemented on FPGA and identifies limitations like low accuracy. The paper proposes using advanced bus architectures like AMBA/AXI in MPSoC to improve communication between memory and input medical images for enhancement, reducing heterogeneity effects. It summarizes various image enhancement algorithms that could be implemented on reconfigurable FPGA hardware for use in MPSoC, including brightness control, contrast stretching, histogram equalization, and edge detection techniques.
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMTIRJET Journal
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
An Efficient Design Approach of ROI Based DWT Using Vedic and Wallace Tree Mu...IJECEIAES
In digital image processing, the compression mechanism is utilized to enhance the visual perception and storage cost. By using hardware architectures, reconstruction of medical images especially Region of interest (ROI) part using Lossy image compression is a challenging task. In this paper, the ROI Based Discrete wavelet transformation (DWT) using separate Wallace- tree multiplier (WM) and modified Vedic Multiplier (VM) methods are designed. The Lifting based DWT method is used for the ROI compression and reconstruction. The 9/7 filter coefficients are multiplied in DWT using Wallace- tree multiplier (WM) and modified Vedic Multiplier (VM). The designed Wallace tree multiplier works with the parallel mechanism using pipeline architecture results with optimized hardware resources, and 8x8 Vedic multiplier designs improves the ROI reconstruction image quality and fast computation. To evaluate the performance metrics between ROI Based DWT-WM and DWT-VM on FPGA platform, The PSNR and MSE are calculated for different Brain MRI images, and also hardware constraints include Area, Delay, maximum operating frequency and power results are tabulated. The proposed model is designed using Xilinx platform using Verilog-HDL and simulated using ModelSim and Implemented on Artix-7 FPGA device.
This survey propose a Novel Joint Data-Hiding and
Compression Scheme (JDHC) for digital images using side match
vector quantization (SMVQ) and image in painting. In this
JDHC scheme image compression and data hiding scheme are
combined into a single module. On the client side, the data should
be hided and compressed in sub codebook such that remaining
block except left and top most of the image. The data hiding and
compression scheme follows raster scanning order i.e. block by
block on row basis. Vector Quantization used with SMVQ and
Image In painting for complex block to control distortion and
error injection. The receiver side process is based on two
methods. First method divide the received image into series of
blocks the receiver achieve hided data and original image
according to the index value in the segmented block. Second
method use edge based harmonic in painting is used to get
original image if any loss in the image.
Artificial Neural Network (ANN) is a fast-growing method which has been used in different
industries during recent years. The main idea for creating ANN which is a subset of artificial
intelligence is to provide a simple model of human brain in order to solve complex scientific and
industrial problems. ANNs are high-value and low-cost tools in modelling, simulation, control,
condition monitoring, sensor validation and fault diagnosis of different systems. It have high
flexibility and robustness in modeling, simulating and diagnosing the behavior of rotating machines
even in the presence of inaccurate input data. They can provide high computational speed for
complicated tasks that require rapid response such as real-time processing of several simultaneous
signals. ANNs can also be used to improve efficiency and productivity of energy in rotating
equipment
n recent years, with the development of graphics p
rocessors, graphics cards have been widely
used to perform general-purpose calculations. Espec
ially with release of CUDA C
programming languages in 2007, most of the research
ers have been used CUDA C
programming language for the processes which needs
high performance computing.
In this paper, a scaling approach for image segment
ation using level sets is carried out by the
GPU programming techniques. Approach to level sets
is mainly based on the solution of partial
differential equations. The proposed method does no
t require the solution of partial differential
equation. Scaling approach, which uses basic geomet
ric transformations, is used. Thus, the
required computational cost reduces. The use of the
CUDA programming on the GPU has taken
advantage of classic programming as spending time a
nd performance. Thereby results are
obtained faster. The use of the GPU has provided to
enable real-time processing. The developed
application in this study is used to find tumor on
MRI brain images.
Efficient Image Compression Technique using Clustering and Random PermutationIJERA Editor
Multimedia data compression is a challenging situation for compression technique, due to the possibility of loss
of data as well as it require large amount of storage place. The minimization of storage place and proper
transmission of these data need compression. In this dissertation we proposed a block based DWT image
compression technique using genetic algorithm and HCC code matrix. The HCC code matrix compressed into
two different set redundant and non-redundant which generate similar pattern of block coefficient. The similar
block coefficient generated by particle of swarm optimization. The process of particle of swarm optimization is
select for the optimal block of DWT transform function. For the experimental purpose we used some standard
image such as Lena, Barbara and cameraman image. The size of resolution of this image is 256*256. The source
of image is Google
A REVIEW OF IMAGE COMPRESSION TECHNIQUESArlene Smith
This document reviews various image compression techniques used for medical images. It begins by discussing the need for compressing large volumes of medical images generated for storage and transmission purposes. It then summarizes several key lossless and lossy compression techniques that have been proposed in other research papers, including techniques using wavelet transforms, DCT, and Huffman encoding. The techniques are evaluated based on their advantages like preserving image quality, and limitations like being slow or expensive. Results showed compression ratios from 2.5% to over 40% were achieved without significantly degrading image quality. Overall the document provides an overview of different medical image compression methods and their performance.
The main aim of image compression is to represent the image with minimum number of bits and thus reduce the size of the image. This paper presents a Symbols Frequency based Image Coding (SFIC) technique for image compression. This method utilizes the frequency of occurrence of pixels in an image. A frequency factor, y is used to merge y pixel values that are in the same range. In this approach, the pixel values of the image that are within the frequency factor, y range are clubbed to the least pixel value in the set. As a result, there is omission of larger pixel values and hence the total size of the image reduces and thus results in higher compression ratio. It is noticed that the selection of the frequency factor, y has a great influence on the performance of the proposed scheme. However, higher PSNR values are obtained since the omitted pixels are mapped to pixels in the similar range. The proposed approach is analyzed with quantization and without quantization. The results are analyzed. This proposed new compression model is compared with Quadtree-segmented AMBTC with Bit Map Omission. From the experimental analysis it is observed that the proposed SFIC image compression scheme with both lossless and lossy techniques outperforms AMBTC-QTBO. Hence, the proposed new compression model is a better choice for lossless and lossy compression applications.
Survey of Hybrid Image Compression Techniques IJECEIAES
A compression process reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques during the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method: it combines lossy and lossless compression to obtain a high-quality compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression brings about high-quality data reconstruction, as the data can later be decompressed with the same result as before compression. Discussion of the knowledge of, and issues with, ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.
A VIDEO COMPRESSION TECHNIQUE UTILIZING SPATIO-TEMPORAL LOWER COEFFICIENTSIAEME Publication
With the advancement of communication in recent years, video compression plays an important role in the transmission of information on social networks and in storage with limited memory capacity. Inadequate transmission bandwidth and low quality also make video compression a serious consideration in the field of communication. There is a need to improve the video compression process so that it can encode video data with low computational complexity and better quality while maintaining speed. In this work, a new technique is developed based on block processing utilizing the lower coefficients between frames.
A spatial image compression algorithm based on run length encodingjournalBEEI
Image compression is vital for many areas, such as communication and the storage of data, which is growing rapidly nowadays. In this paper, a spatial lossy compression algorithm for grayscale images is presented. It exploits the inter-pixel and psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels that fluctuate in value within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel and choosing the best one based on two conditions: first, the selected pixel must not be included in another path; second, the difference between the first pixel in the path and the selected pixel must be within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied to the paths to harvest the inter-pixel redundancy. After applying the proposed algorithm on several test images, promising quality vs. compression-ratio results have been achieved.
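To make the path-growing idea concrete, here is a hedged Python sketch: it greedily extends a path through unvisited 4-neighbors whose values stay within the threshold of the path's first pixel. The paper's "best neighbor" criterion is simplified to the first valid neighbor, so this illustrates the structure rather than reproducing the exact algorithm; run-length encoding would then be applied along each path.

```python
import numpy as np

def find_paths(img, threshold):
    """Greedy sketch: grow paths of 4-connected pixels whose values
    stay within `threshold` of the path's first pixel."""
    h, w = img.shape
    visited = np.zeros((h, w), dtype=bool)
    paths = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx]:
                continue
            path = [(sy, sx)]
            visited[sy, sx] = True
            v0 = int(img[sy, sx])         # reference value of the path
            cy, cx = sy, sx
            while True:
                nxt = None
                for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not visited[ny, nx]
                            and abs(int(img[ny, nx]) - v0) <= threshold):
                        nxt = (ny, nx)    # first valid neighbor (simplified)
                        break
                if nxt is None:
                    break
                visited[nxt] = True
                path.append(nxt)
                cy, cx = nxt
            paths.append(path)
    return paths
```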
The document summarizes a proposed technique for lossless image compression that uses a hybrid of data folding and arithmetic coding. It begins by explaining lossless compression and existing techniques like data folding, arithmetic coding, and their individual use in compression. It then describes the proposed technique, which applies multiple iterations of row and column data folding to an image, followed by arithmetic coding. Test results on various images show the hybrid technique achieves better compression ratios and fewer bits per pixel than existing methods. The technique provides lossless compression but with increased computational time compared to other lossless methods.
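The folding operator itself is not spelled out in the summary above; the sketch below shows one plausible lossless column fold, keeping the left half as a base and the mirrored right half as residuals. Both the operator and the names are assumptions made for illustration only.

```python
import numpy as np

def fold_columns(img):
    """Hypothetical lossless column fold: keep the left half, store
    the mirrored right half as differences against it. Repeating on
    rows and columns shrinks the array before arithmetic coding. For
    odd widths, the middle column would need separate handling."""
    h, w = img.shape
    half = w // 2
    left = img[:, :half].astype(np.int16)
    right = np.fliplr(img[:, w - half:]).astype(np.int16)
    return left, right - left  # base + residuals reconstruct the image
```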
This document proposes a system to implement common image processing routines for both single and distributed processors. It will develop algorithms for routines like thresholding, brightness/contrast adjustments, inversion, smoothing, edge detection, and morphological operations. The system will execute the routines sequentially on one processor and in parallel across multiple processors. It will analyze and compare the execution times to evaluate the performance benefits of single versus distributed computing for image processing.
Channel encoding system for transmitting image over wireless network IJECEIAES
Various encoding schemes have been introduced to date focusing on effective image transmission in the presence of error-prone artifacts in the wireless communication channel. A review of existing channel encoding systems shows that they are mostly inclined toward the compression scheme and less toward the problem of superior signal retention, as they lack an essential consideration of network states. Therefore, the proposed manuscript introduces a cost-effective lossless encoding scheme that ensures resilient transmission of different forms of images. Adopting an analytical research methodology, the modeling has been carried out to ensure that a novel series of encoding operations is performed over an image, followed by an effective indexing mechanism. The study outcome confirms that the proposed system outperforms existing encoding schemes in every respect.
MULTIPLE RECONSTRUCTION COMPRESSION FRAMEWORK BASED ON PNG IMAGEijcsity
It is shown that neural networks (NNs) achieve excellent performance in image compression and reconstruction. However, there are still many shortcomings in practical application, which eventually limit the image processing ability of neural networks. On this basis, a joint framework based on a neural network and scale compression is proposed in this paper. The framework first encodes the incoming PNG image information; the image is then converted into binary form and fed into a decoder to reconstruct an intermediate-state image. Next, the intermediate-state image is imported into a zooming compressor and re-compressed to reconstruct the final image. Experimental results show that this method processes digital images better, suppresses the reverse-expansion problem, and improves the compression effect by 4 to 10 times compared with using an RNN alone. When the method is applied to digital image transmission, the effect is far better than that of existing compression methods alone, and the human visual system cannot perceive the change.
Digital image enhancement by brightness and contrast manipulation using Veri...IJECEIAES
This document describes a proposed method for digitally enhancing images using brightness and contrast manipulation implemented in Verilog hardware description language. The method aims to improve low quality images impacted by low exposure and haze. It involves converting image files to hexadecimal, manipulating pixel brightness and contrast values using algorithms, and converting the output back to image format. The algorithms adjust brightness by adding a constant to pixel values and modify contrast by stretching the dynamic range of values. The method is evaluated on vehicle registration plate images and shows improvements over other enhancement methods based on quantitative and qualitative metrics.
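In software terms, the two manipulations reduce to simple per-pixel arithmetic; the Python sketch below uses one common formulation (an additive offset for brightness, scaling about mid-gray for contrast) and is only an approximation of the paper's Verilog datapath, not its exact arithmetic.

```python
import numpy as np

def enhance(img, brightness=0, contrast=1.0):
    """Brightness: add a constant to every pixel. Contrast: stretch
    the dynamic range about the mid-gray value 128. Results are
    clipped back into the 8-bit range."""
    out = (img.astype(np.float32) - 128.0) * contrast + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```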
Combining 3D run-length encoding coding and searching techniques for medical...IJECEIAES
The field of image compression has become a mandatory tool to face the increasing and advancing production of medical images, besides the inevitable need for smaller medical images in telemedicine systems. In spite of its simplicity, the run-length encoding (RLE) technique is a considerably effective and practical tool in the field of lossless image compression; it is widely recommended for 2D space, using common scanning techniques like linear and zigzag. This paper adopts a new algorithm taking advantage of the potential simplicity of the run-length algorithm to contribute a volumetric RLE approach for binary medical data in 3D form. The proposed volumetric RLE (VRLE) algorithm differs from the 2D RLE approach, which exploits intra-slice correlations only, by compressing binary medical data using inter-slice voxel correlations. Furthermore, several scanning forms, such as Hilbert and perimeter scans, are used to extend the proposed technique, determining the best scanning procedure for the data morphology of the segmented organ. The proposed algorithm is applied to four image datasets to obtain as thorough an evaluation as possible. Experimental results and benchmarking illustrate that the performance of the proposed technique surpasses other state-of-the-art techniques with a 1:30 enhancement on average.
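The core of any volumetric RLE is run extraction over a chosen scan order of the voxels; a minimal Python sketch for a binary volume with a plain linear (C-order) scan follows. The Hilbert and perimeter scans mentioned above would simply permute the voxels before this step.

```python
import numpy as np

def rle_3d(volume, order="C"):
    """Run-length encode a binary volume flattened with the given
    scan order; returns (value, run_length) pairs."""
    flat = volume.astype(np.uint8).flatten(order=order)
    change = np.flatnonzero(np.diff(flat)) + 1      # run boundaries
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [flat.size])))
    return list(zip(flat[starts].tolist(), lengths.tolist()))
```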
An improved image compression algorithm based on daubechies wavelets with ar...Alexander Decker
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
An efficient image compression algorithm using dct biorthogonal wavelet trans...eSAT Journals
Abstract
Recently, digital imaging applications have been increasing significantly, which leads to the requirement for effective image compression techniques. Image compression removes redundant information from an image; by using it, we can store only the necessary information, which helps reduce the transmission bandwidth, transmission time, and storage size of an image. This paper proposes a new image compression technique using the DCT-Biorthogonal Wavelet Transform with arithmetic coding to improve the visual quality of an image. It is a simple technique for obtaining better compression results. In this new algorithm, the biorthogonal wavelet transform is applied first, and then a 2D DCT is applied on each block of the low-frequency sub-band. Finally, all values from each transformed block are split, and arithmetic coding is applied for image compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
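A sketch of the transform pipeline named in the abstract, assuming PyWavelets' "bior4.4" filter bank and SciPy's DCT as stand-ins for the authors' exact choices; the arithmetic coding stage is omitted.

```python
import numpy as np
import pywt
from scipy.fftpack import dct

def bior_dct_transform(img):
    """Biorthogonal wavelet decomposition, then a 2-D DCT on each
    8x8 block of the low-frequency sub-band, as the abstract
    describes; the coefficients would then be arithmetic coded."""
    LL, detail = pywt.dwt2(img.astype(np.float32), "bior4.4")
    blocks = []
    h, w = LL.shape
    for yy in range(0, h - h % 8, 8):
        for xx in range(0, w - w % 8, 8):
            b = LL[yy:yy+8, xx:xx+8]
            blocks.append(dct(dct(b.T, norm="ortho").T, norm="ortho"))
    return blocks, detail
```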
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...IJDKP
1) The document proposes a new simple method to reduce visual block artifacts in images compressed using DCT (used in JPEG) for urban surveillance systems.
2) The method smooths only the connection edges between adjacent blocks while keeping other image areas unchanged.
3) Simulation results show the proposed method achieves better image quality, as measured by PSNR, than median and Wiener filters, while using significantly fewer computational resources.
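The boundary-only smoothing lends itself to a very small kernel; the sketch below averages just the pixel pairs straddling each block boundary and leaves interiors untouched. This is a minimal reading of the method, with the block size assumed to be JPEG's 8.

```python
import numpy as np

def smooth_block_edges(img, block=8):
    """Replace each pair of pixels straddling a block boundary with
    their mean; all other pixels are left unchanged."""
    src = img.astype(np.float32)
    out = src.copy()
    h, w = img.shape
    for x in range(block, w, block):          # vertical block boundaries
        out[:, x-1] = out[:, x] = (src[:, x-1] + src[:, x]) / 2
    for y in range(block, h, block):          # horizontal block boundaries
        out[y-1, :] = out[y, :] = (src[y-1, :] + src[y, :]) / 2
    return np.clip(out, 0, 255).astype(np.uint8)
```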
Wavelet based Image Coding Schemes: A Recent Survey ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
Novel hybrid framework for image compression for supportive hardware design of boosting compression
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 11, No. 3, June 2021, pp. 1985~1993
ISSN: 2088-8708, DOI: 10.11591/ijece.v11i3.pp1985-1993
Journal homepage: http://ijece.iaescore.com
Novel hybrid framework for image compression for supportive hardware design of boosting compression
Premachand D. R., U. Eranna
Department of Electronics and Communication Engineering, Ballari Institute of Technology and Management, Ballari, India
Article Info
Article history: Received Nov 10, 2019; Revised Oct 12, 2020; Accepted Nov 1, 2020
ABSTRACT
Performing image compression on resource-constrained hardware is quite a challenging task. Although various approaches to image compression have considered its hardware aspect, problems remain with the memory acceleration associated with the overall operation, which downgrades the performance of the hardware device. Therefore, the proposed approach presents a cost-effective image compression mechanism that offers lossless compression using a unique combination of non-linear filtering, segmentation, and contour detection, followed by optimization. The compression mechanism adopts an analytical approach for significant image compression. The execution of the compression mechanism yields faster response time, reduced mean square error, improved signal quality, and significant compression-ratio performance.
Keywords:
Encoding
Hardware acceleration
Image compression
Segmentation
VLSI architecture
This is an open access article under the CC BY-SA license.
Corresponding Author:
Premachand D. R.
Department of Electronics and Communication Engineering
Ballari Institute of Technology and Management
Ballari, India
Email: premchandbitm@gmail.com
1. INTRODUCTION
Performing different forms of processing over a complex image is a challenging task when it comes to hardware-based implementation. In this respect, image compression is one process that introduces significant overhead in hardware realization. The problems start with the memory system of the hardware component when processing a large number of image pixels. It is believed that the performance of memory management depends solely on the pattern of its accessing mechanism, so the traces of the memory must be analyzed in order to improve the memory management of the hardware. The image compression mechanism aims to optimize the size of an image for smooth transmission and storage [1]. Artifacts present in images can also be removed through image compression [2]. However, achieving significant image compression for area- and energy-constrained devices remains a challenge [3].
In order to perform efficient image compression, various conditions must be satisfied. The primary condition is that lossless compression be used, so that no traced data in the memory system is lost. The secondary condition is a high degree of throughput and increased speed to support higher bandwidth. With increasing operating frequency and higher data bandwidth, a higher rate of throughput is also needed. The main reason is that better analysis of memory traces requires a compression acceleration system built around hardware modeling rather than a software-based symptomatic approach. Another important finding is that a software-based compression approach is
always slower than a hardware-based compression approach, which downgrades compression performance. Therefore, there is a significant tradeoff between the hardware-based and software-based approaches in achieving the target compression efficiency. The increasing use of the Internet-of-Things (IoT) leads to more resource-constrained devices connected to each other through different networks [4]. Image compression can be achieved through software- or hardware-based approaches [5-7].
Most software approaches depend on vector multiplication and other matrix operations [8]. However, these operations are not feasible for resource-constrained devices due to their increased computational complexity. The existing mechanisms are not efficient enough to handle these problems and to offer a hardware-efficient solution for compression techniques [9, 10]. Hence, this paper offers a cost-effective solution providing a balance between signal quality and compression through a hybrid compression technique in a wireless sensor network (WSN). Section 2 of the paper explains the existing research analysis, and the problem description is given in section 3. The proposed technique, along with its algorithm implementation, is elaborated in section 5. The results and comparative analysis are discussed in section 6, and the conclusion is given in section 7.
Various existing compression techniques for digital images are discussed in this section. In Cabronero et al. [11], a hybrid compression approach is adopted in which a transformation operation is carried out over colored mosaic images. A subjective optimization process with supervised learning algorithms is introduced by Amirjanov and Dimililer [12] for medical images. In Wang et al. [13], a line-buffer scheme is introduced for compression over the processing circuits. The work of Chen et al. [14] provided a color filter array along with an improved Golomb-Rice code to achieve significant on-chip compression. A hardware acceleration approach for integrated-circuit compression is introduced in Halawani et al. [15]. Choi et al. [16] considered a hardware throughput approach for on-chip compression. A compression scheme for image sensors is presented by Kaur et al. [17] using an interpolation operation over the on-chip hardware design. In Onishi et al. [18], a multimedia encoder approach is introduced over a single-chip design, achieving a scalable compression process. The implementation of a transform-based compression scheme is described in Zhu et al. [19] using block coding.
A VLSI architecture with a tree-based approach for image-encoder-based compression is given in Hsieh et al. [20]. An energy-efficient VLSI architecture for image compression is introduced in Chen et al. [21], considering ultra-high-definition frames. A gradient-based approach toward improving compression performance is seen in the work of Kim et al. [22]. Lucas et al. [23] used a predictive scheme for performing lossless compression. The work carried out by Yin et al. [24] used an integral-histogram-based approach to accelerate the feature extraction process, enabling better control of computational operations over memory access in hardware. Parikh et al. [25] used the latest encoders to maximize compression performance for complex images; their study used a partitioning process for better memory management during compression over a parallel architecture. Adoption of a color filter array has also proven to be a better solution for compression, especially for applications that demand live compression. Work in this direction has been carried out by Chen et al. [26], where Golomb-Rice coding is implemented to enhance compression efficiency. The problem of color compression is addressed by Peter et al. [27], where the authors used a diffusion-based approach along with an inpainting phenomenon to offer superior quality of the reconstructed image.
A definitive transformation scheme has been presented by Zhang et al. [28] for upgrading the existing JPEG compression performance. There are therefore various approaches towards image compression in which a hardware approach has been considered. A study focusing on energy efficiency as well as the area constraints of circuit design in connection with image compression has been discussed by Zeinolabedin et al. [29]. Kim et al. [30] used an encoding-based approach for carrying out the compression operation, where a preliminary encoder computes the bitstream length while a secondary encoder is responsible for generating the streams of data packets over a VLSI architecture. The next section outlines the research issues.
The overview of existing studies suggests that current compression schemes fail to adopt VLSI implementation effectively. Some of the research issues are:
- Lack of hardware-efficient schemes to minimize computational complexity
- Little consideration of the filtering process in low-quality image compression
- Complete selection of signal processing is not considered in previous works
- Insignificant selection of compression features to yield a cost-effective solution
The study develops a computational model to address the above-stated research problems. The flow of the proposed compression approach is illustrated in Figure 1. The proposed system adopts an analytical
approach in which the input image is processed for cost-effective image compression. The significant aspect of this solution is that it follows a hybrid compression technique: lossless compression over a selected image sector and lossy compression over the remaining regions of the image.
The proposed system makes use of simplified modeling that starts with a filtering operation. This operation assists in eliminating artifacts that could be introduced during transmission of the image over the wireless communication medium. The filtered image is then subjected to the next process where, uniquely, not the complete image undergoes compression. The complete image is hypothetically classified into two parts: one sector of the image is considered critically important, whereas the other part is not. From both the application and communication viewpoints, it is essential that the important sector of the image is subjected to the lossless compression scheme while the less important part of the image is subjected to the lossy compression scheme. The proposed scheme uses contour detection and an optimization process. Another essential part of the implementation is its segmentation process together with the contour detection process. This process contributes towards the extraction of the features used for minimizing the dimensions of the image. The characteristics of the contours are further optimized, where a concatenation operation is carried out over the extracted red, green, and blue components of the image; this maintains the dimensions of the image but carries more effective contour information. The finally obtained optimized contours are subjected to wavelet compression, which makes the compression acceleration swifter on hardware. The next section discusses all the essential process blocks shown in Figure 1.
Figure 1. System architecture (input image → input-image processing → filtering operation → sector-selective compression → segmentation operation → contour detection → optimized contour detection → wavelet-based compression → compressed image)
2. DESIGN PRINCIPLE OF IMPLEMENTATION
Design plays a significant role in developing the architecture. It should be noted that an effective VLSI architecture design demands a suitable computational model that manages and structures all the processes in an efficient manner. This section briefs the strategies followed in image compression to maintain reconstructed image quality while containing computational complexity. The primary assumption is that a quality image is transmitted, although it is characterized by artifacts. The secondary assumption is that these artifacts are eliminated by the upgraded compression process. The tertiary assumption is that lossless compression is applied over a specific region of the image.
2.1. Strategy for Implementation
The proposed system is strategically planned to provide cost-effective image compression and to obtain a quality image after decompression. To achieve this, a part of the image is selected as the sector to be compressed losslessly. The strategy also offers a better approach from the hardware viewpoint, where memory allocation for on-hardware image compression is considered so that complexity can be reduced. The algorithm produces the compressed image through the following steps:
Algorithm of sector-selective compression approach
Input: Input Image (I)
Output: Compressed image (ϕ)
Start
Step-1: initialize I
Step-2: K ← f1(I)
Step-3: I4 ← f2(I)
Step-4: BW ← f3(I4)
Step-5: R ← concatenate(r1, g1, b1)
Step-6: Cr ← concatenate(r, g, b)
Step-7: Cp ← concatenate(R, G, B)
Step-8: ϕ ← f4(Cp)
End
The algorithm steps are elaborated below:
a. Input image processing: The matrix (I) is generated after processing the image for digitization (Step-1) by converting the RGB image to grayscale (the image processing stages are illustrated in Figure 2).
Figure 2. Stages of input image processing (input image → digitization → conversion to grayscale → grayscale image)
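The conversion itself is a one-line operation. The sketch below assumes the common ITU-R BT.601 luma weights, since the paper does not specify which conversion is used:

import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    # BT.601 luma weights (an assumption; the paper does not name them).
    return rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])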
b. Filtering process: This process is applied to eliminate noise in the image before compression. A function f1(x) is introduced to perform non-linear filtering, which also improves the image processing stage (Step-2) (the non-linear filtering stages are given in Figure 3).
Figure 3. Stages of non-linear filtering (grayscale image → non-linear filtering → filtered image)
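The paper does not name the non-linear filter; a median filter is a representative choice for f1(x), sketched below with SciPy:

import numpy as np
from scipy.ndimage import median_filter

def nonlinear_filter(gray: np.ndarray, window: int = 3) -> np.ndarray:
    # Median filtering: a standard non-linear filter that suppresses impulse
    # (salt-and-pepper) noise while preserving edges better than linear smoothing.
    return median_filter(gray, size=window)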
c. Sector-selective compression: This kind of compression selects a specific sector of the image (I) that needs to undergo lossless compression, while the remaining sector is subjected to lossy compression. This combination of compression techniques exploits the strengths of both techniques and eliminates the issues of the compression process. Manual selection of the sector is a critical concern, and it is eliminated by formulating the function f2(x), which performs automated sector selection (Step-3). The algorithm applies automatic thresholding on the image (I) and then converts it into binary form. The process yields an image (I4) after multiplication with the thresholded image (this process is illustrated in Figure 4).
Figure 4. Stages of sector-selective compression (grayscale image → global image thresholding → multiply operation → obtain selective sector)
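A possible realization of f2(x), mirroring the Figure 4 stages, is sketched below. Otsu's method stands in for the automated global threshold, which the paper does not name:

import numpy as np
from skimage.filters import threshold_otsu

def select_sector(filtered: np.ndarray) -> np.ndarray:
    # Global automated threshold -> binary mask -> multiply with the image.
    t = threshold_otsu(filtered)
    mask = (filtered > t).astype(filtered.dtype)   # binary form of the image
    return filtered * mask                         # I4: the selected sector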
d. Segmentation operation: The segmentation process is applied to the selected image sector (I4) (Step-4), where an explicit function f3(x) is used for a multi-scale segmentation operation. The function f3(x) uses fuzzy c-means clustering on the binarized image I4. Later, the summation of all the pixel labels is performed. Finally, the segmented image parts are combined through multiplication, which helps to generate the segmented image (BW) (the segmentation process is represented in Figure 5).
Figure 5. Stages of segmentation operation (selective sector → layer segmentation → segmented image)
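The fuzzy c-means core of f3(x) can be sketched in plain NumPy. The single-scale formulation, cluster count c = 3, and fuzzifier m = 2 below are illustrative assumptions, not the paper's multi-scale configuration:

import numpy as np

def fuzzy_cmeans_segment(img: np.ndarray, c: int = 3, m: float = 2.0,
                         iters: int = 50, eps: float = 1e-9) -> np.ndarray:
    # Cluster pixel intensities with fuzzy c-means, then return hard labels.
    x = img.ravel().astype(np.float64)                    # (N,) pixel values
    centers = np.linspace(x.min(), x.max(), c)            # initial centers
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + eps   # (c, N) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                # membership kernel
        u /= u.sum(axis=0, keepdims=True)                 # memberships sum to 1
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # update centers
    return u.argmax(axis=0).reshape(img.shape)            # per-pixel labels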
e. Contour detection: During this stage, a matrix is formed to retain the corner metric obtained for the input image, which is considered for feature extraction (Step-5). An index is constructed after finding the positive corner sets. Further, the inputs and outputs are matched to form the red (r1), green (g1), and blue (b1) components. The r1 component is initialized with 255, while the other components, g1 and b1, are initialized with 0. These components are concatenated to get the actual corner points (R). During each stage, the maximum regions of the image are identified to detect the peaks of the corners. The inputs and outputs are matched again, and the new r, g, and b components are concatenated to obtain the suppressed corner points (Cr) (Step-6). The contour detection algorithm also multiplies the grayscale image with the contour index so that the contours of the complete image can be generated (the stages of this implementation are given in Figure 6).
Figure 6. Stages of contour detection (segmented image → contour matrix → find red, green, blue components → condition for positive contours → concatenation → obtain contour points → extract maximized region → index-contours multiplication → suppressed contour points)
f. Optimized contour detection: The performance of the contour detection is optimized in this part of the implementation. Here, the prior information of the contour matrix is accessed, and an S-shaped curve membership function produces the individual matrix. The values within the probability limits are collected for the concatenation of the final contours, leading to the optimized contour points (Cp) (Step-7). The final process of compression is to apply wavelets, where the function f4(x) is applied to the image of contour points (Cp). This operation generates the final compressed image ϕ (Step-8); the process is shown in Figure 7.
Figure 7. Stages of optimized contour detection (segmented image → S-shaped curve membership function → apply computational geometry → perform wavelet compression)
3. RESULT ANALYSIS
The core idea of the outcome analysis is to ensure that the proposed computational model offers streamlined facilitation for the hardware realization of image compression. A part of the image is considered for lossless compression over different forms of images. The visual results of the filtered image (after applying the non-linear filter), the sector image, the multi-scale segmented image, the contour detection, and the optimized contour are given in Figure 8. The analysis of the system is performed both for the selective region and for the complete image by considering mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (as represented in Figures 9-11 respectively).
Figure 9. PSNR performance
Figure 10. Mean squared error performance
Figure 11. Compression ratio
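For reference, the three reported metrics follow their standard definitions (the paper does not restate them); a minimal NumPy sketch:

import numpy as np

def mse(orig: np.ndarray, recon: np.ndarray) -> float:
    return float(np.mean((orig.astype(float) - recon.astype(float)) ** 2))

def psnr(orig: np.ndarray, recon: np.ndarray, peak: float = 255.0) -> float:
    # PSNR = 10 * log10(peak^2 / MSE), in dB.
    return 10.0 * np.log10(peak**2 / mse(orig, recon))

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    return original_bytes / compressed_bytes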
The analysis of the above factors indicates that the PSNR performance of the system is better for the selective region (SR) than for the full image (FI). The signal quality of the system is also improved at reduced dimensions. The algorithm takes about 0.9877 s to execute, which is near-instantaneous and therefore offers a cost-effective solution for the compression operation.
4. CONCLUSION
This paper has presented a discussion of a computational model for facilitating hardware acceleration while performing image compression. The significant contribution of this paper is to bridge the trade-off between the hardware-based and software-based compression acceleration processes with respect to the VLSI architecture. The contributions of the proposed system towards image compression are as follows: i) the proposed system is cost effective as it does not use the complete resource for compression, since only the selected sector is subjected to lossless compression; ii) the proposed system is also cost effective because the compression is carried out over lightweight contour features, which not only offers storage optimization but also retains better signal quality; and iii) the implementation of the proposed system does not involve heavy resource dependencies to carry out the task.
REFERENCES
[1] T. Lin and P. Hao, “Compound image compression for real-time computer screen image transmission,” IEEE
Transactions on Image Processing, vol. 14, no. 8, pp. 993–1005, 2005.
[2] H. G. Myung, J. Lim, and D. J. Goodman, “Single carrier FDMA for uplink wireless transmission,” IEEE
Vehicular Technology Magazine, vol. 1, no. 3, pp. 30–38, 2006.
[3] N. Javaid, Z. Abbas, M. S. Fareed, Zahoor Ali Khan, and N. Alrajeh, “M-ATTEMPT: A new energy-efficient
routing protocol for wireless body area sensor networks,” Procedia Computer Science, vol. 19, pp. 224–231, 2013.
[4] B. Guo, D. Zhang, Z. Wang, Z. Yu, and X. Zhou, “Opportunistic IoT: Exploring the harmonious interaction
between human and the internet of things,” Journal of Network and Computer Applications, vol. 36, no. 6,
pp. 1531–1539, 2013.
[5] L. Atzori, A. Iera, and G. Morabito, “SIoT: Giving a social structure to the internet of things,” IEEE Communications
Letters, vol. 15, no. 11, pp. 1193–1195, 2011.
[6] R. M. Dyas and B. Tang, “Software, method and apparatus for rate controlled image compression,” U.S. Patent
6,504,494, Jan. 7, 2003.
[7] S. J. Wee and M. P. Schuyler, “Image compression featuring selective re-use of prior compression data,” U.S.
Patent 6,697,061, Feb. 24, 2004.
[8] S. Chatterjee and S. Sen, “Cache-efficient matrix transposition,” in Proceedings Sixth International Symposium on
High-Performance Computer Architecture, HPCA-6 (Cat. No. PR00550), 2000, pp. 195–205.
[9] J. L. Bentley, D. D. Sleator, R. E. Tarjan, and V. K. Wei, “A locally adaptive data compression scheme,”
Communications of the ACM, vol. 29, no. 4, pp. 320–330, 1986.
[10] A. Jas, J. Ghosh-Dastidar, M. E. Ng, and N. A. Touba, “An efficient test vector compression scheme using selective
Huffman coding,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 6,
pp. 797–806, 2003.
[11] M. Hernández-Cabronero, V. Sanchez, I. Blanes, F. Aulí-Llinàs, M. W. Marcellin and J. Serra-Sagristà, “Mosaic-
Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide
Images,” in IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 21–32, Jan. 2019.
[12] K. Dimililer and A. Amircanov, “Image Compression System with an Optimisation of Compression Ratio,” IET
Image Processing, vol. 13, 2019.
[13] H. Wang, T. Wang, L. Liu, H. Sun, and N. Zheng, “Efficient Compression-Based Line Buffer Design for
Image/Video Processing Circuits,” in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 27,
no. 10, pp. 2423–2433, Oct. 2019.
[14] C. Chen, S. Chen, C. Liao, and P. A. R. Abu, “Lossless CFA Image Compression Chip Design for Wireless
Capsule Endoscopy,” in IEEE Access, vol. 7, pp. 107047–107057, 2019.
[15] Y. Halawani, B. Mohammad, M. Al-Qutayri, and S. F. Al-Sarawi, “Memristor-Based Hardware Accelerator for
Image Compression,” in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 26, no. 12,
pp. 2749–2758, Dec. 2018.
[16] J. Choi, B. Kim, H. Kim, and H. Lee, “A High-Throughput Hardware Accelerator for Lossless Compression of a
DDR4 Command Trace,” in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 27, no. 1,
pp. 92–102, Jan. 2019.
[17] A. Kaur, D. Mishra, S. Jain, and M. Sarkar, “Content Driven On-Chip Compression and Time Efficient
Reconstruction for Image Sensor Applications,” in IEEE Sensors Journal, vol. 18, no. 22, pp. 9169–9179, Nov.
2018.
[18] T. Onishi et al., “A Single-Chip 4K 60-fps 4:2:2 HEVC Video Encoder LSI Employing Efficient Motion
Estimation and Mode Decision Framework With Scalability to 8K,” in IEEE Transactions on Very Large Scale
Integration (VLSI) Systems, vol. 26, no. 10, pp. 1930–1938, Oct. 2018.
[19] Z. H. Zhu, X. Meng, J. Zhou, and B. Zeng, “Compression-Dependent Transform-Domain Downward Conversion
for Block-Based Image Coding,” in IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2635–2649, Jun.
2018.
[20] J. Hsieh, M. Shih, and X. Huang, “Algorithm and VLSI Architecture Design of Low-Power SPIHT Decoder for
mHealth Applications,” in IEEE Transactions on Biomedical Circuits and Systems, vol. 12, no. 6, pp. 1450–1457,
Dec. 2018.
[21] Q. Chen, H. Sun, and N. Zheng, “Worst Case Driven Display Frame Compression for Energy-Efficient Ultra-HD
Display Processing,” in IEEE Transactions on Multimedia, vol. 20, no. 5, pp. 1113–1125, May. 2018.
[22] K. Kim, C. Lee, and H. Lee, “A Sub-Pixel Gradient Compression Algorithm for Text Image Display on a Smart
Device,” in IEEE Transactions on Consumer Electronics, vol. 64, no. 2, pp. 231–239, May. 2018.
[23] L. F. R. Lucas, N. M. M. Rodrigues, L. A. da Silva Cruz, and S. M. M. de Faria, “Lossless Compression of Medical
Images Using 3-D Predictors,” in IEEE Transactions on Medical Imaging, vol. 36, no. 11, pp. 2250–2260, Nov.
2017.
[24] S. Yin, P. Ouyang, T. Chen, L. Liu, and S. Wei, “A Configurable Parallel Hardware Architecture for Efficient
Integral Histogram Image Computing,” in IEEE Transactions on Very Large Scale Integration (VLSI) Systems,
vol. 24, no. 4, pp. 1305–1318, Apr. 2016.
[25] S. S. Parikh, D. Ruiz, H. Kalva, G. Fernández-Escribano, and V. Adzic, “High Bit-Depth Medical Image
Compression With HEVC,” in IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 2, pp. 552–560,
Mar. 2018.
[26] S. Chen, T. Liu, C. Shen, and M. Tuan, “VLSI Implementation of a Cost-Efficient Near-Lossless CFA Image
Compressor for Wireless Capsule Endoscopy,” in IEEE Access, vol. 4, pp. 10235–10245, 2016.
[27] P. Peter, L. Kaufhold, and J. Weickert, “Turning Diffusion-Based Image Colorization Into Efficient Color
Compression,” in IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 860–869, Feb. 2017.
[28] S. Zhang, X. Tian, C. Xiong, and J. Tian, “Unified VLSI architecture for photo core transform used in JPEG XR,”
in Electronics Letters, vol. 51, no. 8, pp. 628–630, Apr. 2015.
[29] S. M. A. Zeinolabedin, J. Zhou, X. Liu, and T. T. Kim, “An Area- and Energy-Efficient FIFO Design Using Error-
Reduced Data Compression and Near-Threshold Operation for Image/Video Applications,” in IEEE Transactions
on Very Large Scale Integration (VLSI) Systems, vol. 23, no. 11, pp. 2408–2416, Nov. 2015.
[30] S. Kim, M. Kim, J. Kim, and H. Lee, “Fixed-Ratio Compression of an RGBW Image and Its Hardware
Implementation,” in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 6, no. 4,
pp. 484–496, Dec. 2016.
BIOGRAPHIES OF AUTHORS
Premachand D. R. is currently working as an Associate Professor in the Department of Electronics and
Communication Engineering, Ballari Institute of Technology and Management, Karnataka, India.
He has two years of industrial experience and seventeen years of teaching experience. He has
published three journal papers and three conference papers. His areas of interest are VLSI, image processing,
and microelectronics.
Dr. U. Eranna is currently working as Professor and Head of the Department of Electronics and
Communication Engineering, Ballari Institute of Technology and Management. He has published
8 conference papers and 20 journal papers. His areas of research are communication and control
systems.