This paper proposes an efficient method for designing a two-channel quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important, as it affects both image quality and system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energies of the low-pass analysis filter of the filter bank. The objective function is minimized directly using a nonlinear unconstrained optimization method. Experimental results on images show that the proposed method performs better than existing methods. The impact of filter characteristics such as stopband attenuation, stopband edge, and filter length on the quality of the reconstructed images is also investigated.
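The weighted objective described above can be sketched as follows. This is a minimal illustration, not the paper's formulation: the band edges, weights, and grid density are assumptions of my own, and the reconstruction-error term uses the standard QMF condition |H(0.5π)|² = 1/2 for the prototype low-pass filter.

```python
import cmath
import math

def freq_response(h, w):
    """H(e^{jw}) = sum_n h[n] e^{-jwn} for a real FIR filter h."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

def qmf_objective(h, wp=0.4 * math.pi, ws=0.6 * math.pi,
                  alpha=1.0, beta=1.0, gamma=1.0, grid=256):
    """Weighted sum of reconstruction error at w = 0.5*pi, passband
    error, and stopband energy of the low-pass prototype filter."""
    # Reconstruction error: for a QMF bank, |H(0.5*pi)|^2 should equal 1/2.
    e_r = (abs(freq_response(h, 0.5 * math.pi)) ** 2 - 0.5) ** 2
    # Passband error: |H| should stay close to 1 on [0, wp].
    e_p = sum((abs(freq_response(h, wp * k / grid)) - 1.0) ** 2
              for k in range(grid + 1)) / (grid + 1)
    # Stopband energy: |H|^2 averaged over [ws, pi].
    e_s = sum(abs(freq_response(h, ws + (math.pi - ws) * k / grid)) ** 2
              for k in range(grid + 1)) / (grid + 1)
    return alpha * e_r + beta * e_p + gamma * e_s
```

Any unconstrained minimizer can then be applied directly to `qmf_objective`; a 2-tap Haar-like prototype `[0.5, 0.5]` already scores far better than a trivial all-pass tap `[1.0]`.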
Design of Quadrature Mirror Filter Bank using Particle Swarm Optimization (PSO) - IDES Editor
In this paper, the particle swarm optimization (PSO) technique is used to design a two-channel quadrature mirror filter (QMF) bank. A new method is developed to optimize the prototype filter response in the passband, the stopband, and the overall filter bank response. The design problem is formulated as the nonlinear unconstrained optimization of an objective function given by the weighted sum of the squared errors in the passband, the stopband, and the overall filter bank response at frequency ω = 0.5π. The particle swarm optimization technique is used to solve this optimization problem. Compared with conventional design techniques, the proposed method gives better performance in terms of reconstruction error, mean square error in the passband and stopband, and computational time. Various design examples are presented to illustrate the benefits of the proposed method.
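The PSO technique the abstract relies on can be sketched as a minimal swarm loop. The inertia and acceleration constants below are typical textbook values, not the paper's settings, and the demo objective (a sphere function) stands in for the QMF objective.

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for an unconstrained objective f."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # each particle's best position
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity = inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f
```

Replacing the sphere objective with a QMF error function of the filter taps turns this directly into the kind of design procedure the abstract describes.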
Fractal Image Compression of Satellite Color Imageries Using Variable Size of... - CSCJournals
Fractal image compression of the standard color Lena image and of satellite imagery has been carried out using a variable-size range-block method. The image is partitioned subject to maximum and minimum range-block sizes after transforming the RGB color image into YUV form. Affine transformations and entropy coding are applied to achieve fractal compression. Matlab simulations were carried out for three different cases of variable range-block sizes, and the image is reconstructed using iterated functions and inverse transforms. The results indicate that both the color Lena image and the satellite imagery achieve a higher compression ratio (CR) and good peak signal-to-noise ratio (PSNR) with Rmax = 16 and Rmin = 8: for the color Lena image, CR ≈ 13.9 and PSNR ≈ 25.9 dB; for the rural satellite image, CR ≈ 16 and PSNR ≈ 23 dB; and for the urban satellite image, CR ≈ 16.4 and PSNR ≈ 16.5 dB. The analysis demonstrates that the variable-range fractal compression scheme, applied to both color and grayscale Lena and satellite imagery, yields higher CR and PSNR values than a fixed range-block size of 4 with 4 iterations. The results are presented and discussed in the paper.
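The CR and PSNR figures quoted above follow the standard definitions, which can be sketched directly (the sample values below are illustrative, not from the paper):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """CR = uncompressed size / compressed size."""
    return original_bits / compressed_bits
```

For example, a uniform error of one gray level gives an MSE of 1 and hence a PSNR of about 48.13 dB for 8-bit images.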
This document summarizes a presentation on analyzing the performance of maximal ratio combining (MRC) receivers in κ-μ fading channels with channel estimation error. It introduces κ-μ fading channels and MRC receivers. It then discusses the system model considered which involves MRC reception over L branches experiencing κ-μ fading with MMSE channel estimation. It derives an expression for the effective output signal-to-noise ratio (SNR) of the MRC receiver under channel estimation imperfections. It obtains the moment generating function of the effective SNR using the κ-μ distribution and derives an expression to evaluate the bit error probability of MPSK modulations using a half-plane decision method.
This document presents a probabilistic approach to rate control for optimal color image compression and video transmission. It summarizes a rate-distortion model for color image compression that was previously introduced. Based on this model, the document proposes an improved color image compression algorithm that uses the discrete cosine transform and models subband coefficients with a Laplacian distribution. It shows how the rate-distortion model can be used for rate control of compression, allowing target rates or bitrates to be achieved for still images and video sequences. Simulation results demonstrate that the new algorithm outperforms other available compression systems such as JPEG in terms of peak signal-to-noise ratio and a perceptual quality metric.
Weighted Performance comparison of DWT and LWT with PCA for Face Image Retrie... - cscpconf
This paper compares the performance of face image retrieval systems based on the discrete wavelet transform (DWT) and the lifting wavelet transform (LWT), each combined with principal component analysis (PCA). These techniques are implemented and their performance investigated using frontal facial images from the ORL database. Although the DWT is effective at representing image features and is well suited to face image retrieval, it still encounters implementation problems, e.g. floating-point operations and decomposition speed. We exploit the lifting scheme, a spatial approach to constructing wavelet filters that offers a feasible alternative to its classical counterpart. The lifting scheme has such attractive properties as convenient construction, simple structure, integer-to-integer transforms, low computational complexity, and flexible adaptivity, revealing its potential in face image retrieval. Compared with PCA alone and with DWT plus PCA, the lifting wavelet transform with PCA requires less computation, while DWT-PCA gives a higher retrieval rate; the 'sym2' wavelet in particular outperforms all other wavelets tested.
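The integer-to-integer property of the lifting scheme mentioned above can be illustrated with the simplest case, a one-level Haar (S-transform) lift. This is a generic sketch of the predict/update structure, not the paper's 'sym2' filter:

```python
def haar_lift_forward(x):
    """One level of an integer-to-integer Haar (S-transform) via lifting.
    Input length must be even; returns (approximation, detail) lists."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]           # predict step
    approx = [e + (d >> 1) for e, d in zip(evens, detail)]  # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    evens = [a - (d >> 1) for a, d in zip(approx, detail)]
    odds = [d + e for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out
```

Because each lifting step is inverted exactly (the same floor-shift is subtracted that was added), the transform maps integers to integers and reconstructs losslessly, which is what makes it attractive for fast retrieval pipelines.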
This document summarizes several proposed optimized color transforms for improving the performance of image demosaicing algorithms. It begins with describing a basic sequential demosaicing algorithm that first interpolates the green color component and then reconstructs the red and blue components based on differences from green. The document then proposes alternative color transforms based on optimizing for: 1) energy compactness and non-singularity of the color components, 2) smoothness of the color components by minimizing their high-frequency energy in both the wavelet and Fourier domains, and 3) relative energy in the Fourier domain. Equations for calculating the optimal coefficients for these proposed color transforms are provided.
A lossless color image compression using an improved reversible color transfo... - eSAT Journals
Abstract: In conventional lossless color image compression methods, the pixels of each color component are interleaved, predicted, and finally encoded. In this paper, we propose a lossless color image compression method that uses hierarchical prediction of the chrominance-channel pixels and encodes them with modified Huffman coding. The R, G, and B channels of an input image are transformed into the YCuCv color space using an improved reversible color transform. A conventional lossless image coder such as CALIC is then used to compress the luminance channel Y, while the chrominance channels Cu and Cv are encoded with hierarchical decomposition and directional prediction. Effective context modeling of the prediction residuals is adopted. Experimental results show that the proposed method improves compression performance over the existing method. Keywords: lossless color image compression, hierarchical prediction, reversible color transform, modified Huffman coding.
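The abstract does not spell out the improved transform, but the idea of a reversible (integer, exactly invertible) color transform can be illustrated with the well-known JPEG 2000 reversible color transform (RCT), which plays the same role:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, lossless)."""
    y = (r + 2 * g + b) >> 2   # luma: floor((R + 2G + B) / 4)
    cu = b - g                 # chroma difference channels
    cv = r - g
    return y, cu, cv

def rct_inverse(y, cu, cv):
    """Exact inverse: the floor in the forward transform cancels."""
    g = y - ((cu + cv) >> 2)
    r = cv + g
    b = cu + g
    return r, g, b
```

The floor operations cancel exactly on inversion, so every RGB triple round-trips losslessly, which is the property a lossless coder like CALIC depends on.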
A Novel Cosine Approximation for High-Speed Evaluation of DCT - CSCJournals
This article presents a novel cosine approximation for high-speed evaluation of the discrete cosine transform (DCT) using Ramanujan ordered numbers. The proposed method uses Ramanujan ordered numbers to convert the angles of the cosine function to integers; these angles are then evaluated with a 4th-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation increases the adder count by 13.6%, but it avoids floating-point multipliers and requires only (N/2)log2(N) shifts and (3N/2)log2(N) − N + 1 additions to evaluate the coefficients of an N-point DCT, thereby improving the speed of computation.
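The abstract does not give the paper's polynomial, but a 4th-degree approximation with error on the order of 10^-3 is easy to exhibit: on a reduced range such as [0, π/4], the truncated Taylor series already stays below about 4 × 10^-4 (the next omitted term is x⁶/720 ≈ 3.3 × 10^-4 at x = π/4). This sketch is illustrative, not the paper's method:

```python
import math

def cos4(x):
    """4th-degree polynomial approximation of cos(x) on [0, pi/4]
    (truncated Taylor series 1 - x^2/2 + x^4/24)."""
    x2 = x * x
    return 1.0 - x2 / 2.0 + x2 * x2 / 24.0
```

A full DCT would combine this with standard range reduction (symmetries of cosine) so every angle falls in the polynomial's accurate interval.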
Color Image Watermarking using Cycle Spinning based Sharp Frequency Localized... - CSCJournals
This paper describes a new approach to color image watermarking using the cycle-spinning-based sharp frequency localized contourlet transform and principal component analysis. The approach starts by decomposing the images into subbands with the contourlet transform (CT), applied successively to all color spaces of both the host and watermark images; the principal components of the middle bands (x bands) are then used for the insertion operation. The ordinary contourlet transform suffers from a lack of frequency localization, and since localization is the most important criterion for watermarking, the conventional CT is not well suited to it. The sharp frequency localized contourlet overcomes this problem but lacks translation invariance; hence the cycle-spinning-based sharp frequency localized contourlet is chosen for watermarking. Embedding in the middle-level subbands (x bands) preserves the curved nature of edges in the host image, so less disturbance is observed when the host and watermarked images are compared. This yields a very good peak signal-to-noise ratio (PSNR): instead of directly adding the mid-frequency components of the watermark and host images, only the principal components are added. This also reduces the payload to be embedded, so the host image suffers very little distortion. The use of principal components further enables reliable extraction of the watermark information from the host image, giving good correlation between the input watermark and the extracted one. This technique has shown very high robustness under various intentional and non-intentional attacks.
This document describes a hybrid technique for image enhancement that uses both frequency domain and spatial domain techniques. It begins with applying frequency domain techniques like discrete cosine transform (DCT) or discrete wavelet transform (DWT) to separate an image into magnitude and phase spectra. The magnitude is then enhanced before recombining it with the phase using inverse DCT/DWT. Spatial domain techniques like power law or log transforms are then applied to further enhance contrast and brightness. The technique is evaluated on sample images and shown to achieve better PSNR and lower MSE than frequency domain techniques alone. In conclusion, combining frequency and spatial domain methods provides an effective approach for image enhancement.
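The spatial-domain step of the hybrid pipeline above uses classical point transforms whose formulas are standard; a minimal sketch (with the usual scaling conventions assumed, not taken from the document) is:

```python
import math

def power_law(pixels, gamma, max_val=255.0):
    """Power-law (gamma) transform: s = max * (r / max)^gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return [max_val * (p / max_val) ** gamma for p in pixels]

def log_transform(pixels, max_val=255.0):
    """Log transform s = c * log(1 + r), with c chosen so that the
    maximum intensity maps back to max_val."""
    c = max_val / math.log(1.0 + max_val)
    return [c * math.log(1.0 + p) for p in pixels]
```

In the hybrid technique these would be applied after the inverse DCT/DWT recombination to adjust contrast and brightness.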
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N... - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAIN - ijma
This document presents a single image super resolution algorithm that uses both spatial and wavelet domains. The algorithm takes advantage of both domains by upsampling the image in the spatial domain using bicubic interpolation, then refining the high frequency subbands in the wavelet domain. It is iterative, using back projection to minimize reconstruction error between the original low resolution image and the downsampled output. Wavelet-based denoising is applied to the high frequency subband to remove noise before reconstruction. Experimental results on various test images show the proposed algorithm achieves similar or better PSNR than other compared methods.
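The back-projection loop at the heart of the algorithm above can be sketched in one dimension with the standard library alone. Block averaging and nearest-neighbour upsampling stand in for the document's acquisition model and bicubic interpolation; the names and parameters are illustrative assumptions:

```python
def downsample(x, f=2):
    """Average each block of f samples (a crude acquisition model)."""
    return [sum(x[i:i + f]) / f for i in range(0, len(x), f)]

def upsample(x, f=2):
    """Nearest-neighbour upsampling (stand-in for bicubic interpolation)."""
    return [v for v in x for _ in range(f)]

def back_project(lr, f=2, iters=20, step=1.0):
    """Iteratively refine a high-resolution estimate so that downsampling
    it reproduces the observed low-resolution signal."""
    hr = upsample(lr, f)  # initial estimate
    for _ in range(iters):
        # residual between observed LR signal and simulated LR signal
        err = [l - d for l, d in zip(lr, downsample(hr, f))]
        corr = upsample(err, f)
        hr = [h + step * c for h, c in zip(hr, corr)]
    return hr
```

In the full algorithm, a wavelet-domain denoising and high-frequency refinement step would be interleaved with each back-projection update.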
Face Alignment Using Active Shape Model And Support Vector Machine - CSCJournals
The document proposes improvements to the classical Active Shape Model (ASM) algorithm for face alignment. The improvements include: 1) Combining Sobel filtering and 2D profiles to build texture models, 2) Applying Canny edge detection for enhancement, 3) Using Support Vector Machines (SVM) to classify landmarks for more accurate localization, and 4) Automatically adjusting 2D profile lengths based on image size. Experimental results on two face databases show the proposed ASM-SVM method achieves better alignment accuracy than the classical ASM and other methods.
The document summarizes and compares three recent face alignment algorithms: supervised descent method (SDM), local binary features regression (LBF), and face alignment under poses and occlusion (HPO). SDM formulates alignment as regression of local image features to landmark movements. LBF learns local binary features from regions around landmarks and uses global regression. HPO predicts landmark visibility and learns descent directions considering occlusion. The document evaluates the methods on standard datasets, finding SDM and LBF perform well except under extreme conditions, while HPO better handles poses and occlusion.
Image Registration using NSCT and Invariant Moment - CSCJournals
Image registration is the process of matching images taken at different times, from different sensors, or from different viewpoints. It is an important step in a great variety of applications such as computer vision, stereo navigation, medical image analysis, pattern recognition, and watermarking. In this paper an improved feature point selection and matching technique for image registration is proposed. The technique is based on the ability of the nonsubsampled contourlet transform (NSCT) to extract significant features irrespective of feature orientation. Correspondence between the extracted feature points of the reference and sensed images is then established using Zernike moments, and the matched feature point pairs are used to estimate the transformation parameters mapping the sensed image to the reference image. Experimental results illustrate the registration accuracy over a wide range of panning and zooming movement, as well as the robustness of the proposed algorithm to noise. Apart from image registration, the proposed method can also be used for shape matching and object classification. Keywords: Image Registration, NSCT, Contourlet Transform, Zernike Moment.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
This document summarizes a computer vision project that aims to allow a camera fixed to a drone to determine its position relative to a pipe. The method uses images of a pipe covered in a known pattern to extract the camera's orientation. Key steps include binarizing images, detecting pattern dots, calculating 3D coordinates, and using EPnP to retrieve the camera pose from 2D-3D correspondences. The project achieves accurate pose estimation but has limitations such as not distinguishing pattern orientation. Future work could involve a modified pattern to address these limitations.
Towards better performance: phase congruency based face recognition - TELKOMNIKA JOURNAL
Phase congruency is an edge detector and a measure of significant features in an image; it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. First, the valuable phase congruency features, the gradient edges and their associated angles, are used separately to classify 130 subjects taken from three face databases, with the motivation of eliminating the feature-extraction phase; this significantly reduces complexity. Second, the training process is modified: a new technique called averaging-vectors is developed to accelerate training and minimize matching time. For broader comparison and accurate evaluation, three competitive classifiers are considered: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive and acceptable, and the experimental results show promising recognition rates with reasonable matching times.
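The three classifiers named above are simple distance measures over feature vectors; a nearest-template classifier using them can be sketched as follows (the template dictionary and labels are illustrative, not from the paper):

```python
import math

def euclidean(a, b):
    """ED: straight-line distance between feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """MD: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    """CD: 1 minus the cosine of the angle between the vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def nearest_template(probe, templates, dist=euclidean):
    """Classify a probe vector by its nearest stored (averaged) template."""
    return min(templates, key=lambda label: dist(probe, templates[label]))
```

With the averaging-vectors idea, each subject's template would be the mean of its training feature vectors, so matching reduces to one distance computation per subject.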
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel... - IJERD Editor
This document summarizes a research paper that proposes a modification to the traditional carry select adder (CSLA) circuit to reduce its area and power consumption. The modified CSLA replaces the full adders used for carry inputs of 1 with smaller and less complex binary excess-1 converters (BEC). Evaluation shows the proposed design has lower area (reduced logic gates) and power usage than a regular CSLA, with only a small increase in delay. Simulation results confirm the modified CSLA achieves area and power reductions compared to the traditional CSLA structure.
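The binary excess-1 converter (BEC) idea can be modeled at the bit level: the cin = 0 sum is computed once with a ripple adder, and the cin = 1 result is derived from it by adding 1 with a BEC instead of a second full adder chain. This is a behavioral sketch with my own bit ordering and helper names, not the paper's circuit:

```python
def bec(bits):
    """Binary excess-1 converter: adds 1 to a little-endian bit list.
    In the modified CSLA it replaces the ripple adder for carry-in = 1."""
    out, carry = [], 1
    for b in bits:
        out.append(b ^ carry)
        carry = b & carry
    return out

def csla_block(a, b, cin):
    """One carry-select block: compute the cin = 0 sum with a ripple
    adder, derive the cin = 1 result with a BEC, then multiplex."""
    n = len(a)
    s0, c = [], 0
    for i in range(n):
        s0.append(a[i] ^ b[i] ^ c)
        c = (a[i] & b[i]) | (c & (a[i] ^ b[i]))
    ext0 = s0 + [c]      # n+1 bits: cin = 0 sum plus carry-out
    ext1 = bec(ext0)     # cin = 1 result via one BEC, not n more adders
    chosen = ext1 if cin else ext0   # multiplexer selects on actual cin
    return chosen[:n], chosen[n]

def to_bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

The area saving comes from the BEC needing roughly one XOR and one AND per bit, versus a full adder per bit in the conventional duplicated carry chain.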
Introduction to Wavelet Transform and Two Stage Image DE noising Using Princi... - ijsrd.com
Over the past two decades, various techniques have been developed to support a variety of image processing applications, including medical, satellite, space, transmission and storage, radar, and sonar applications. Noise in images affects all of these applications, so it must be removed, and many methods and techniques exist for doing so. The wavelet transform (WT) has proved effective for noise removal, but it has some problems that are overcome by the PCA-based method. This paper presents an efficient image denoising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which provides better preservation of local image structures. In this method a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from a local window by block-matching-based LPG. In image denoising, a compromise must be found between noise reduction and the preservation of significant image detail. PCA is a standard statistical technique for simplifying a dataset by reducing it to lower dimensions, commonly used for data reduction in statistical pattern recognition and signal processing. The procedure is iterated a second time to further improve the denoising performance, with the noise level adaptively adjusted in the second stage.
This document proposes a human skin color detection method using beta mixture models (BMM) in RGB color space. It introduces BMMs as a way to model pixel value distributions with compact ranges like skin color pixels in RGB space. It then describes using variational inference with a factorized approximation to obtain the posterior distribution of the BMM parameters given training image data. The method is evaluated on a standard skin detection database and shown to perform better than other methods.
Analysis of Image Super-Resolution via Reconstruction Filters for Pure Transl... - CSCJournals
In this work, a special case of the image super-resolution problem where the only type of motion is global translational motion and the blurs are shift-invariant is investigated. The necessary conditions for exact reconstruction of the original image by using finite impulse-response reconstruction filters are investigated and determined. If the number of available low-resolution images is larger than a threshold and the blur functions meet a certain property, a reconstruction filter set for perfect image super-resolution can be generated even in the absence of motion. Given that the conditions are satisfied, a method for exact super-resolution is presented to validate the analysis results and it is shown that for the fully determined case, perfect reconstruction of the original image is achieved. Finally, some realistic conditions that make the super-resolution problem ill-posed are treated and their effects on exact super-resolution are discussed.
Image processing techniques applied for pitting corrosion analysis - eSAT Journals
Abstract
In order to study the behavior of the early stage of pitting corrosion, an image analysis based on discrete wavelet packet transform
and fractals was used. Image feature parameters were extracted and analyzed to characterize the pitting corrosion development with
test time. It was found that the feature parameters: Shannon entropy, energy, fractal dimension and intercept increased with the test
time. Therefore, image processing techniques are promising and effective tools for analyzing and detecting pitting corrosion.
Keywords: corrosion, pitting corrosion, surface topography, surface analysis, carbon steel, tap water
A Trained CNN Based Resolution Enhancement of Digital Images - IJMTST Journal
Image resolution enhancement (RE) is a technique for estimating or synthesizing a high-resolution (HR) image from one or several low-resolution (LR) images; an RE technique reconstructs a higher-resolution image or sequence from the observed LR images. In this project we present the main resolution enhancement methods and the advances taking place in them, since the technique has many applications in various fields. Most resolution enhancement techniques are based on the same idea: using information from several different images to create one upsized image, with algorithms extracting details from every image in a sequence to reconstruct the other frames.
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D... - CSCJournals
An interferogram filtering scheme is presented in this paper. Its main concern is to lower the residue count while preserving the location and jump height of the lines of phase discontinuity. The proposed method is based on a statistical model of the coefficients of a multi-scale oriented basis: neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables, a Gaussian vector and a hidden positive scalar multiplier. Under this model, the Bayesian least-squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. The method substantially reduces the number of residues without affecting the lines of height discontinuity.
The document describes the emergence of e-learning and virtual universities, driven by the need for lifelong continuing education. It establishes a continuum between virtuality as a complement to in-person teaching and full virtuality, and lists six possible models of higher-education institutions along this continuum: 1) face-to-face universities with virtual elements, 2) face-to-face universities with a virtual extension, 3) shared spaces of virtual courses, 4) virtual universities attached to traditional universities
The document discusses how the filmmaker's media product uses conventions from real action films, such as a one shot down a corridor to build tension, as seen in John Wick and Inception. It also aims to engage audiences by asking questions from the start. While not challenging conventions, the filmmaker executed simple techniques well, such as long still shots, to remain faithful to the original vision despite limitations with equipment.
This document discusses solar energy subsidies in the United States. It provides background on what subsidies are and explains that the solar investment tax credit (ITC) was recently extended by Congress for three more years through 2021. This extension will help stabilize the solar industry and the US economy by continuing to drive solar investment and job growth. The document also considers arguments against solar subsidies and presents counterarguments for why the solar industry deserves continued government support.
Color Image Watermarking using Cycle Spinning based Sharp Frequency Localized...CSCJournals
This paper describes a new approach for color image watermarking using the Cycle Spinning based Sharp Frequency Localized Contourlet Transform and Principal Component Analysis. The approach starts with decomposition of both host and watermark images into subbands using the Contourlet Transform (CT), applied successively to all color spaces. The principal components of the middle bands (x bands) are then used for the insertion operation. The ordinary contourlet transform suffers from a lack of frequency localization; since localization is the most important criterion for watermarking, the conventional CT is not well suited to it. The Sharp Frequency Localized Contourlet overcomes this problem but lacks translation invariance, so the cycle-spinning-based sharp frequency localized contourlet is chosen for watermarking. Embedding in the middle-level subbands (x bands) preserves the curved nature of edges in the host image, so less disturbance is observed when the host and watermarked images are compared. This results in a very good Peak Signal to Noise Ratio (PSNR): instead of directly adding the mid-frequency components of the watermark and host images, only the principal components are added. The payload is likewise reduced, so the host image suffers very little distortion. Using principal components also aids extraction of the watermark information from the host image, giving good correlation between the input watermark and the extracted one. The technique shows very high robustness under various intentional and non-intentional attacks.
This document describes a hybrid technique for image enhancement that uses both frequency domain and spatial domain techniques. It begins with applying frequency domain techniques like discrete cosine transform (DCT) or discrete wavelet transform (DWT) to separate an image into magnitude and phase spectra. The magnitude is then enhanced before recombining it with the phase using inverse DCT/DWT. Spatial domain techniques like power law or log transforms are then applied to further enhance contrast and brightness. The technique is evaluated on sample images and shown to achieve better PSNR and lower MSE than frequency domain techniques alone. In conclusion, combining frequency and spatial domain methods provides an effective approach for image enhancement.
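The spatial-domain step described above can be illustrated with a power-law (gamma) transform. This is a minimal sketch, assuming 8-bit pixel values and an illustrative gamma of 0.5; the function name and constants are chosen here for illustration only:

```python
def power_law_transform(pixels, gamma, max_val=255):
    """Apply s = c * r^gamma to each pixel, with c chosen so that
    max_val maps back to max_val (output stays in [0, max_val])."""
    c = max_val / (max_val ** gamma)  # normalizing constant
    return [round(c * (p ** gamma)) for p in pixels]

row = [0, 16, 64, 128, 255]
brightened = power_law_transform(row, 0.5)  # gamma < 1 brightens mid-tones
```

A gamma below 1 lifts mid-tones (e.g. pixel 64 maps to about 128) while leaving the extremes 0 and 255 fixed, which is why power-law transforms are a common contrast/brightness adjustment.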
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAINijma
This document presents a single image super resolution algorithm that uses both spatial and wavelet domains. The algorithm takes advantage of both domains by upsampling the image in the spatial domain using bicubic interpolation, then refining the high frequency subbands in the wavelet domain. It is iterative, using back projection to minimize reconstruction error between the original low resolution image and the downsampled output. Wavelet-based denoising is applied to the high frequency subband to remove noise before reconstruction. Experimental results on various test images show the proposed algorithm achieves similar or better PSNR than other compared methods.
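The iterative back-projection loop mentioned above can be sketched in one dimension. This is a toy model, not the paper's algorithm: sample replication stands in for bicubic interpolation, and pair-averaging stands in for the true downsampling model:

```python
def downsample(x):          # assumed LR formation model: average adjacent pairs
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):            # crude 2x upsampling by sample replication
    return [v for v in x for _ in range(2)]

def back_project(lr, iters=5):
    """Iteratively correct the HR estimate so that downsampling it
    reproduces the observed LR signal (back-projection of the error)."""
    hr = upsample(lr)       # initial HR estimate
    for _ in range(iters):
        err = [l - d for l, d in zip(lr, downsample(hr))]
        hr = [h + c for h, c in zip(hr, upsample(err))]
    return hr

lr = [10.0, 20.0, 30.0]
hr = back_project(lr)
# the result is consistent: downsampling hr reproduces lr exactly
```

The key property, as in the paper, is consistency: the reconstruction error between the observed LR signal and the downsampled output is driven to zero by the loop.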
Face Alignment Using Active Shape Model And Support Vector MachineCSCJournals
The document proposes improvements to the classical Active Shape Model (ASM) algorithm for face alignment. The improvements include: 1) Combining Sobel filtering and 2D profiles to build texture models, 2) Applying Canny edge detection for enhancement, 3) Using Support Vector Machines (SVM) to classify landmarks for more accurate localization, and 4) Automatically adjusting 2D profile lengths based on image size. Experimental results on two face databases show the proposed ASM-SVM method achieves better alignment accuracy than the classical ASM and other methods.
The document summarizes and compares three recent face alignment algorithms: supervised descent method (SDM), local binary features regression (LBF), and face alignment under poses and occlusion (HPO). SDM formulates alignment as regression of local image features to landmark movements. LBF learns local binary features from regions around landmarks and uses global regression. HPO predicts landmark visibility and learns descent directions considering occlusion. The document evaluates the methods on standard datasets, finding SDM and LBF perform well except under extreme conditions, while HPO better handles poses and occlusion.
Image Registration using NSCT and Invariant MomentCSCJournals
Image registration is the process of matching images that are taken at different times, from different sensors or from different viewpoints. It is an important step for a great variety of applications such as computer vision, stereo navigation, medical image analysis, pattern recognition and watermarking. In this paper an improved feature point selection and matching technique for image registration is proposed. The technique is based on the ability of the nonsubsampled contourlet transform (NSCT) to extract significant features irrespective of feature orientation. The correspondence between the extracted feature points of the reference image and the sensed image is then established using Zernike moments, and the matched feature point pairs are used to estimate the transformation parameters mapping the sensed image to the reference image. Experimental results illustrate the registration accuracy over a wide range of panning and zooming movement, as well as the robustness of the proposed algorithm to noise. Apart from image registration, the proposed method can be used for shape matching and object classification. Keywords: Image Registration, NSCT, Contourlet Transform, Zernike Moment.
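Once feature point pairs are matched, the final step of estimating transformation parameters can be illustrated for the simplest motion model. For pure translation the least-squares fit reduces to the mean displacement; this toy sketch assumes that model (the full method handles richer transformations):

```python
def estimate_translation(ref_pts, sensed_pts):
    """Least-squares translation mapping sensed points onto reference
    points; for a pure-translation model this is the mean displacement."""
    n = len(ref_pts)
    dx = sum(r[0] - s[0] for r, s in zip(ref_pts, sensed_pts)) / n
    dy = sum(r[1] - s[1] for r, s in zip(ref_pts, sensed_pts)) / n
    return dx, dy

ref = [(10, 12), (40, 8), (25, 30)]
sensed = [(7, 10), (37, 6), (22, 28)]      # ref shifted by (-3, -2)
print(estimate_translation(ref, sensed))   # (3.0, 2.0)
```

With noisy matches the mean displacement remains the least-squares optimum for this model, which is why averaging over many feature pairs gives the robustness to noise that registration methods report.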
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document summarizes a computer vision project that aims to allow a camera fixed to a drone to determine its position relative to a pipe. The method uses images of a pipe covered in a known pattern to extract the camera's orientation. Key steps include binarizing images, detecting pattern dots, calculating 3D coordinates, and using EPnP to retrieve the camera pose from 2D-3D correspondences. The project achieves accurate pose estimation but has limitations such as not distinguishing pattern orientation. Future work could involve a modified pattern to address these limitations.
Towards better performance: phase congruency based face recognitionTELKOMNIKA JOURNAL
Phase congruency is an edge detector and a measure of significant features in an image; it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. Firstly, the valuable phase congruency features, the gradient edges and their associated angles, are utilized separately for classifying 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase; by doing this, the complexity can be significantly reduced. Secondly, the training process is modified: a new technique, called averaging-vectors, is developed to accelerate training and minimize the matching time. For broader comparison and accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is competitive and acceptable, and the experimental results show promising recognition rates with a reasonable matching time.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document summarizes a research paper that proposes a modification to the traditional carry select adder (CSLA) circuit to reduce its area and power consumption. The modified CSLA replaces the full adders used for carry inputs of 1 with smaller and less complex binary excess-1 converters (BEC). Evaluation shows the proposed design has lower area (reduced logic gates) and power usage than a regular CSLA, with only a small increase in delay. Simulation results confirm the modified CSLA achieves area and power reductions compared to the traditional CSLA structure.
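The binary excess-1 converter (BEC) at the heart of the modified CSLA simply adds one to its input using XOR/AND chains instead of full adders. A gate-level sketch in Python, with bits listed LSB first; the function name is illustrative:

```python
def bec(bits):
    """n-bit Binary Excess-1 Converter: returns bits + 1 using only
    XOR/AND logic, as used for the carry-in = 1 path of the modified CSLA.
    `bits` is LSB-first; the output carries one extra bit (carry out)."""
    out, carry = [], 1          # adding 1 is an XOR chain with a carry
    for b in bits:
        out.append(b ^ carry)   # sum bit
        carry = b & carry       # carry propagates only through 1-bits
    out.append(carry)
    return out

# 0b011 (= 3) -> 0b0100 (= 4), shown LSB first
print(bec([1, 1, 0]))  # [0, 0, 1, 0]
```

Because each stage needs only one XOR and one AND gate, the BEC replaces the duplicated full-adder chain of a conventional CSLA, which is the source of the area and power savings the paper reports.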
Introduction to Wavelet Transform and Two Stage Image DE noising Using Princi...ijsrd.com
In the past two decades, various techniques have been developed to support a variety of image processing applications, including medical, satellite, space, transmission and storage, radar and sonar. But noise in an image affects all of these applications, so it is necessary to remove noise from images, and various methods and techniques exist to do so. The wavelet transform (WT) has proved effective in noise removal, but it has some problems that are overcome by the PCA method. This paper presents an efficient image de-noising scheme using principal component analysis (PCA) with local pixel grouping (LPG), which provides better preservation of local image structures. In this method a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from the local window by block-matching-based LPG. In image de-noising, a compromise must be found between noise reduction and preserving significant image details. PCA is a statistical technique for simplifying a dataset by reducing it to lower dimensions; it is a standard technique commonly used for data reduction in statistical pattern recognition and signal processing. The proposed LPG-PCA procedure is iterated a second time to further improve the de-noising performance, with the noise level adaptively adjusted in the second stage.
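The local pixel grouping step selects, within a search window, the blocks most similar to the reference block by sum of squared differences. A minimal 1-D sketch of that block-matching idea (not the paper's full PCA pipeline; names and the 1-D setting are illustrative):

```python
def group_similar_blocks(signal, ref_start, block_len, k):
    """Return start indices of the k blocks closest (by SSD) to the
    reference block; the reference itself is always the closest."""
    ref = signal[ref_start:ref_start + block_len]

    def ssd(start):  # sum of squared differences to the reference block
        blk = signal[start:start + block_len]
        return sum((a - b) ** 2 for a, b in zip(ref, blk))

    starts = range(len(signal) - block_len + 1)
    return sorted(starts, key=ssd)[:k]

sig = [5, 5, 9, 9, 5, 5, 0, 1]
print(group_similar_blocks(sig, 0, 2, 2))  # [0, 4]: the two [5, 5] blocks
```

In the paper's scheme the grouped blocks become the training samples for estimating the local PCA basis, so the quality of this grouping directly controls how well local structure is preserved.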
This document proposes a human skin color detection method using beta mixture models (BMM) in RGB color space. It introduces BMMs as a way to model pixel value distributions with compact ranges like skin color pixels in RGB space. It then describes using variational inference with a factorized approximation to obtain the posterior distribution of the BMM parameters given training image data. The method is evaluated on a standard skin detection database and shown to perform better than other methods.
Analysis of Image Super-Resolution via Reconstruction Filters for Pure Transl...CSCJournals
In this work, a special case of the image super-resolution problem where the only type of motion is global translational motion and the blurs are shift-invariant is investigated. The necessary conditions for exact reconstruction of the original image by using finite impulse-response reconstruction filters are investigated and determined. If the number of available low-resolution images is larger than a threshold and the blur functions meet a certain property, a reconstruction filter set for perfect image super-resolution can be generated even in the absence of motion. Given that the conditions are satisfied, a method for exact super-resolution is presented to validate the analysis results and it is shown that for the fully determined case, perfect reconstruction of the original image is achieved. Finally, some realistic conditions that make the super-resolution problem ill-posed are treated and their effects on exact super-resolution are discussed.
Image processing techniques applied for pitting corrosion analysiseSAT Journals
Abstract
In order to study the behavior of the early stage of pitting corrosion, an image analysis based on discrete wavelet packet transform and fractals was used. Image feature parameters were extracted and analyzed to characterize the pitting corrosion development with test time. It was found that the feature parameters Shannon entropy, energy, fractal dimension and intercept increased with the test time. Therefore the image processing techniques were promising and effective tools to analyze and detect the pitting corrosion.
Keywords: corrosion, pitting corrosion, surface topography, surface analysis, carbon steel, tap water
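Of the feature parameters listed, Shannon entropy is the most direct to compute: it is taken over the normalized grey-level histogram of the image. A minimal sketch, assuming 8-bit grey levels:

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of the grey-level distribution."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    # sum of -p*log2(p) over non-empty bins
    return sum(-(h / n) * math.log2(h / n) for h in hist if h)

# a flat region carries zero entropy; a 4-level uniform patch carries 2 bits
print(shannon_entropy([7] * 16))          # 0.0
print(shannon_entropy([0, 1, 2, 3] * 4))  # 2.0
```

A corroding surface develops texture, spreading the histogram over more grey levels, which is why entropy grows with test time in the study above.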
A Trained CNN Based Resolution Enhancement of Digital ImagesIJMTST Journal
Image Resolution Enhancement (RE) is a technique to estimate or synthesize a high-resolution (HR) image from one or several low-resolution (LR) images; it reconstructs a higher-resolution image or sequence from the observed LR images. In this project we present the methods in resolution enhancement and the advancements that are taking place, since it has many applications in various fields. Most resolution enhancement techniques are based on the same idea: using information from several different images to create one upsized image. Algorithms try to extract details from every image in a sequence to reconstruct other frames.
The document discusses government subsidies for various energy industries. It notes that subsidies come in various forms like cash payments, tax reductions, and grants. The government spends $9-13 billion annually subsidizing the energy sector, with fossil fuels receiving over 6 times more subsidies than solar energy. However, the level of subsidies can be influenced by political and corporate pressures. While subsidies are meant to stabilize the economy, some argue they provide an unfair advantage to industries that cannot compete without assistance. The future of subsidies, especially for solar, will depend on the policy directions of upcoming elections and administrations.
What are subsidies?
Subsidies are money the government gives to the public or to corporations so that essential goods and services can be sold at cheaper rates. Simply put, it is the opposite of taxation. There are two different kinds of subsidies: growth-oriented and welfare-oriented. Reduction in fuel and food costs is an example of welfare-oriented subsidies. The government also sometimes gives money to companies or farmers for operating in certain industries.
These are examples of growth-oriented subsidies. Such subsidies encourage companies to operate in industries that may have high business costs but are still important for the public and the economy.
The oil marketing industry is the best example of this. These companies sell fuel at cheaper rates, incurring a loss; yet fuel plays a very important role in the economy. So, to encourage companies to operate in this environment, the government pays them subsidies to make up for the loss.
Foreign direct investment in BangladeshKonok Mondal
This document provides an overview of key economic indicators and investment opportunities in Bangladesh. It notes that Bangladesh has a growing economy with a GDP of $223.941 billion and expected growth rate of 7.1%. The country attracts foreign direct investment and has competitive incentives like tax exemptions and export processing zones to attract investors. Key sectors for foreign investment include energy, infrastructure, pharmaceuticals, textiles, and others.
Strategy Evaluation of Carnival corporation-plc Konok Mondal
Carnival Corporation is the largest cruise company in the world with over 100 ships. It has several cruise line brands and serves over 8 million passengers annually. An internal assessment found strengths in its size, fuel efficiency efforts, and home port locations, but weaknesses in its safety record and high agent commission costs. Externally, opportunities exist in expanding Asian markets and cruising growth, while threats include fuel costs, safety issues, and tight competition. The document recommends strategies around marketing, entertainment, safety improvements, and environmental initiatives.
Maruti Suzuki is the largest automobile company in India. [1] It was established in 1981 as a collaboration between the Indian government and Suzuki Motor Corporation of Japan. [2] Maruti 800 was Maruti's first car, launched in 1983. [3] It went through different stages of the product lifecycle - introduction from 1983-1986, growth from 1987-1996, maturity from 1997-2002, and decline from 2002 to present. [4] Facing competition from newer small cars, Maruti repositioned the 800 as the Alto 800 in 2012 to try and revive sales. [5] When analyzed using the BCG matrix, Maruti 800 shifted from a star during introduction and growth to a
1) The document discusses the potential for implementing a water transportation system connecting the Hatirjheel, Gulshan, Banani, and Baridhara lakes in Dhaka, Bangladesh.
2) It provides background on the Hatirjheel project which revived one of the lakes, and argues water transportation could provide a cheap public transit option while reducing environmental impacts.
3) However, challenges include coordinating different agencies, managing waste disposal, and potential negative effects on water quality and wildlife from increased usage. The document analyzes this proposal's costs and benefits.
Take a look at what I've gotten up to in digital marketing, web development, and social media management in the past 6 years. With samples of work & contact information.
Two-Channel Quadrature Mirror Filter Bank Design using FIR Polyphase ComponentIDES Editor
This paper proposes a new method for designing a two-channel quadrature mirror filter bank using polyphase components of the low-pass prototype filter. The method formulates an objective function that is minimized using a power optimization approach. The objective function is a linear combination of passband error and stopband residual energy. Simulation results show the proposed method requires less computational time and fewer iterations compared to existing methods. Design examples demonstrate the method produces improved stopband attenuation and reconstruction error performance.
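The kind of objective such QMF designs minimize can be sketched numerically: a term measuring deviation from the power-complementary (perfect reconstruction) condition |H(ω)|² + |H(π−ω)|² = 1, plus a weighted stopband energy term. This is an illustrative sketch with assumed grid size, stopband edge and weight, not the paper's exact formulation:

```python
import cmath
import math

def freq_resp(h, w):
    """Frequency response H(w) of an FIR filter with taps h."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

def qmf_objective(h, grid=64, ws=0.6 * math.pi, alpha=0.5):
    """PR-deviation term plus alpha-weighted stopband energy (both
    averaged over a frequency grid); ws is an assumed stopband edge."""
    recon = stop = 0.0
    for k in range(grid):
        w = math.pi * k / grid
        m = abs(freq_resp(h, w)) ** 2 + abs(freq_resp(h, math.pi - w)) ** 2
        recon += (m - 1.0) ** 2 / grid          # deviation from PR condition
        if w >= ws:
            stop += abs(freq_resp(h, w)) ** 2 / grid  # stopband energy
    return recon + alpha * stop

haar = [0.5, 0.5]  # trivial power-complementary prototype
# the PR-deviation term vanishes for the Haar pair
print(qmf_objective(haar, alpha=0.0) < 1e-12)  # True
```

A longer prototype filter trades a larger design space (lower stopband energy) against this same reconstruction-error term, which is the balance the weighted-sum formulation controls.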
Design and Implementation of Efficient Analysis and Synthesis QMF Bank for Mu...TELKOMNIKA JOURNAL
This section presents a new technique for designing an efficient two-channel Quadrature Mirror Filter bank with constant phase across frequency. To approach the perfect reconstruction condition for the filter bank, the low-pass prototype filter is designed from its impulse response and frequency response in three regions: the passband, the stopband and the transition band. Using the reconstruction error and the stopband attenuation observed in the prototype filter response, the performance of the introduced filter can be evaluated from the filter coefficients generated in the design examples, which determine the quality of the filter bank design under near-perfect reconstruction conditions.
This document describes the simulation of a quadrature mirror filter (QMF) bank using MATLAB software. It provides background on QMF banks, describing their use in signal processing applications like subband coding. The document then gives details on implementing a two-channel QMF bank in MATLAB, including generating the analysis and synthesis filters and processing a sample input signal through the filter bank. It concludes by showing the output of the MATLAB simulation and citing references.
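The two-channel processing chain (analysis filters, downsampling, then upsampling, synthesis filters and summation) can be illustrated with the trivial Haar pair, which reconstructs the input perfectly. This is a Python sketch of the signal flow, not the document's MATLAB code:

```python
import math

def qmf_analysis(x):
    """Split x (even length) into low/high subbands with the Haar pair;
    filtering and 2:1 downsampling are folded into one step."""
    r = 1 / math.sqrt(2)
    lo = [(x[i] + x[i + 1]) * r for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) * r for i in range(0, len(x), 2)]
    return lo, hi

def qmf_synthesis(lo, hi):
    """Upsample, filter and sum the subbands back into the full-rate signal."""
    r = 1 / math.sqrt(2)
    out = []
    for l, h in zip(lo, hi):
        out += [(l + h) * r, (l - h) * r]
    return out

x = [1.0, 4.0, 2.0, 8.0, 5.0, 7.0]
lo, hi = qmf_analysis(x)
y = qmf_synthesis(lo, hi)
# y matches x up to floating-point rounding: perfect reconstruction
```

Longer prototype filters give sharper subband separation than Haar but only near-perfect reconstruction, which is the trade-off the QMF design literature above addresses.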
Adaptive Trilateral Filter for In-Loop Filteringcsandit
High Efficiency Video Coding (HEVC) has achieved significant coding efficiency improvement beyond the existing video coding standard by employing several new coding tools. Deblocking Filter, Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) are currently introduced for in-loop filtering in the HEVC standard. However, these filters are implemented in the spatial domain despite the temporal correlation within video sequences. To reduce artifacts and better align object boundaries in video, an algorithm for in-loop filtering is proposed and implemented in the HM-11.0 software. The proposed algorithm allows an average bitrate reduction of about 0.7% and improves the PSNR of the decoded frame by 0.05%, 0.30% and 0.35% in luminance and chroma.
Modified Adaptive Lifting Structure Of CDF 9/7 Wavelet With Spiht For Lossy I...idescitation
We present a modified structure of the 2-D CDF 9/7 wavelet transform based on adaptive lifting for image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, adaptive lifting performs lifting-based prediction in local windows in the direction of high pixel correlation; hence it adapts far better to the orientation features of local windows in the image. The predicting and updating signals of adaptive lifting can be derived even at fractional-pixel precision to achieve high resolution, while still maintaining perfect reconstruction. To enhance performance, the adaptive modified structure of the 2-D CDF 9/7 transform is coupled with the SPIHT coding algorithm to mitigate the drawbacks of the wavelet transform. Experimental results show that the proposed modified image coding scheme outperforms JPEG 2000 in both PSNR and visual quality, with improvement of up to 6.0 dB over the existing structure on images with rich orientation features.
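The lifting predict/update steps and the perfect-reconstruction property claimed above can be illustrated with the reversible integer 5/3 lifting scheme (the JPEG 2000 reversible transform), used here as a simpler stand-in for the CDF 9/7 structure:

```python
def lift53_forward(x):
    """One level of integer 5/3 lifting on an even-length signal.
    Predict: details d from odd samples; update: approximation s from evens."""
    xe, xo = x[0::2], x[1::2]
    n = len(xo)
    d = [xo[i] - (xe[i] + xe[min(i + 1, n - 1)]) // 2 for i in range(n)]
    s = [xe[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def lift53_inverse(s, d):
    """Undo the lifting steps in reverse order; exact for integers."""
    n = len(d)
    xe = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    xo = [d[i] + (xe[i] + xe[min(i + 1, n - 1)]) // 2 for i in range(n)]
    x = []
    for e, o in zip(xe, xo):
        x += [e, o]
    return x

x = [12, 15, 14, 11, 9, 13, 20, 18]
s, d = lift53_forward(x)
print(lift53_inverse(s, d) == x)  # True: lifting is exactly invertible
```

Because the inverse simply replays the same integer operations in reverse, reconstruction is exact regardless of how the prediction direction is chosen, which is what lets adaptive (direction-dependent) lifting keep perfect reconstruction.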
FPGA Implementation of FIR Filter using Various Algorithms: A RetrospectiveIJORCS
This paper is a review of FPGA implementations of finite impulse response (FIR) filters with low cost and high performance. Its key contribution is an elaborate analysis of hardware implementations of FIR filters using different algorithms, i.e., Distributed Arithmetic (DA), DA with Offset Binary Coding (DA-OBC), Common Sub-expression Elimination (CSE) and sum-of-powers-of-two (SOPOT), using fewer resources without affecting the performance of the original FIR filter.
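The distributed arithmetic idea replaces the multipliers of an FIR inner product with a lookup table of coefficient partial sums, addressed bit-serially by the input samples. A software sketch of that mechanism, with hypothetical integer taps and unsigned samples (hardware DA also handles signed inputs, omitted here):

```python
def da_dot(coeffs, samples, nbits=8):
    """Bit-serial distributed-arithmetic inner product.
    lut[m] holds the sum of coefficients selected by bit pattern m."""
    k = len(coeffs)
    lut = [sum(c for j, c in enumerate(coeffs) if (m >> j) & 1)
           for m in range(1 << k)]
    acc = 0
    for b in range(nbits):                    # one LUT access per bit plane
        addr = sum(((s >> b) & 1) << j for j, s in enumerate(samples))
        acc += lut[addr] << b                 # shift-accumulate
    return acc

h = [3, -1, 4, 2]       # hypothetical integer filter taps
x = [17, 5, 200, 33]    # unsigned 8-bit samples
print(da_dot(h, x) == sum(a * b for a, b in zip(h, x)))  # True
```

The cost shifts from one multiplier per tap to a 2^k-entry table plus a shift-accumulator, which is why DA (and its OBC variant, which halves the table) suits LUT-based FPGAs so well.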
1. The document presents a design for a modified Booth recoder using a fused add-multiply (FAM) operator to implement digital signal processing applications more efficiently.
2. It proposes a new recoding technique to decrease the critical path delay and reduce area and power consumption of the FAM unit compared to existing recoding schemes.
3. The technique is also applied to the implementation of finite impulse response (FIR) filters to further optimize hardware usage and achieve faithfully rounded outputs within tight area and power constraints for mobile applications.
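Radix-4 (modified) Booth recoding, the scheme the proposed recoder refines, scans overlapping triplets of multiplier bits and maps each to a digit in {−2, …, 2}, halving the number of partial products. A sketch for unsigned multipliers (the paper's recoder targets the same digit set; function names are illustrative):

```python
def booth4_digits(m, nbits):
    """Radix-4 Booth recoding of an unsigned nbits-wide multiplier.
    Digit i = -2*b(2i+1) + b(2i) + b(2i-1), with b(-1) = 0."""
    def b(i):
        return (m >> i) & 1 if 0 <= i < nbits else 0
    ndigits = nbits // 2 + 1          # one extra digit covers the unsigned pad
    return [-2 * b(2 * i + 1) + b(2 * i) + b(2 * i - 1)
            for i in range(ndigits)]

def booth4_multiply(a, m, nbits=8):
    """Sum the partial products a * digit_i * 4^i."""
    return sum(d * a << (2 * i)
               for i, d in enumerate(booth4_digits(m, nbits)))

print(booth4_multiply(13, 250) == 13 * 250)  # True
```

Each digit produces a partial product that is 0, ±a or ±2a (a shift and optional negation, no multiplier), so an n-bit multiplication needs only about n/2 partial products, which is the starting point for the FAM-oriented recoding above.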
Novel algorithm for color image demosaikcing using laplacian maskeSAT Journals
Abstract Images in any digital camera are formed with the help of a monochrome sensor, which can be either a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS). Interpolation is the base for any demosaicking process. The input for interpolation is the output of the Bayer Color Filter Array (CFA), a mosaic-like lattice structure. The Bayer CFA samples the channel information of the R, G and B values separately, assigning only one channel component per pixel. To generate a complete color image, three channel values are required; to find the missing samples we use interpolation, a technique of estimating missing values from the discrete observed samples scattered over the space. Thus demosaicking, or de-Bayering, is an algorithm for finding the missing values from the mosaic-patterned output of the Bayer CFA. Interpolation results in a few artifacts, such as the zippering effect at edges. This paper introduces a demosaicking algorithm which outperforms the existing demosaicking algorithms; its main aim is to accurately estimate the green component. The standard mechanism to compare performance is PSNR (Peak Signal to Noise Ratio), and the image dataset used for comparison was the Kodak image dataset. The algorithm was implemented using Matlab 2009b. Keywords: Demosaicking, Interpolation, Bayer CFA, Laplacian Mask, Correlation.
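The baseline that demosaicking algorithms improve on can be sketched for the green channel: at a red or blue site of the Bayer mosaic, bilinear interpolation simply averages the four green neighbours. A minimal sketch for interior pixels (not the proposed Laplacian-mask algorithm, which adds a correction term from the co-located channel):

```python
def interp_green(bayer, r, c):
    """Bilinear estimate of the missing green value at (r, c), an interior
    red/blue site of the Bayer mosaic: mean of the 4 green neighbours."""
    return (bayer[r - 1][c] + bayer[r + 1][c] +
            bayer[r][c - 1] + bayer[r][c + 1]) / 4

# on a constant scene every site reads the same value, so the
# interpolated green equals that constant
flat = [[80] * 4 for _ in range(4)]
print(interp_green(flat, 1, 2))  # 80.0
```

Plain averaging blurs across edges, producing the zippering artifact mentioned above; gradient- or Laplacian-corrected estimators exist precisely to suppress it.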
Concurrent Ternary Galois-based Computation using Nano-apex Multiplexing Nibs...VLSICS Design
Novel realizations of concurrent computations utilizing three-dimensional lattice networks and their corresponding carbon-based field emission controlled switching is introduced in this article. The formalistic ternary nano-based implementation utilizes recent findings in field emission and nano applications which include carbon-based nanotubes and nanotips for three-valued lattice computing via field-emission methods. The presented work implements multi-valued Galois functions by utilizing concurrent nano-based lattice systems, which use two-to-one controlled switching via carbon-based field emission devices by using nano-apex carbon fibers and carbon nanotubes that were presented in the first part of the article. The introduced computational extension utilizing many-to-one carbon field-emission devices will be further utilized in implementing congestion-free architectures within the third part of the article. The emerging nano-based technologies form important directions in low-power compact-size regular lattice realizations, in which carbon-based devices switch less-costly and more-reliably using much less power than silicon-based devices. Applications include low-power design of VLSI circuits for signal processing and control of autonomous robots.
1) The document describes the design of a compensation filter for a Generalized Comb Filter (GCF) using a Maximally-Flat minimization technique.
2) The coefficients of the proposed compensation filter are obtained by solving two linear equations to minimize passband droop.
3) The compensation filter operates at a lower rate than the GCF and significantly reduces passband droop when cascaded with the GCF.
Design of Digital Filter and Filter Bank using IFIRIRJET Journal
This document discusses the design of digital filters and filter banks using interpolated finite impulse response (IFIR) filters. IFIR filters can reduce computational complexity compared to traditional FIR filters while maintaining linear phase properties. The document presents the mathematical analysis and design process for single-stage and two-stage IFIR prototype filters, and uniform M-channel filter banks are designed using these prototypes. Simulation results show that two-stage IFIR filters provide up to 82% reduction in computational cost compared to single-stage IFIR filters for filter bank design. IFIR filters are concluded to be an efficient approach for designing nearly perfect reconstruction filter banks with significant computational savings.
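The IFIR idea implements a narrowband filter as a sparse "stretched" model filter G(z^L) cascaded with an image-suppressing interpolator F(z); the cascade is equivalent to a single FIR filter whose impulse response is the upsampled G convolved with F. A toy sketch of that equivalence with hypothetical integer taps:

```python
def conv(a, b):
    """Plain linear convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def upsample_taps(g, L):
    """Insert L-1 zeros between taps: G(z) -> G(z^L)."""
    out = []
    for t in g[:-1]:
        out += [t] + [0] * (L - 1)
    return out + [g[-1]]

g = [1, 2, 1]          # hypothetical model filter
f = [1, 1]             # hypothetical interpolator
x = [3, 0, -2, 5, 1, 4]

cascade = conv(conv(x, upsample_taps(g, 2)), f)   # IFIR: two cheap stages
direct = conv(x, conv(upsample_taps(g, 2), f))    # equivalent single FIR
print(cascade == direct)  # True: same overall response
```

The saving comes from the zero taps of G(z^L): they cost nothing at run time, so the cascade needs far fewer multiplications than the equivalent single sharp FIR filter.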
A Low Hardware Complex Bilinear Interpolation Algorithm of Image Scaling for ...arpublication
In this brief, a low-complexity, low-memory-requirement, high-quality algorithm is proposed for VLSI implementation of an image scaling processor. The proposed image scaling algorithm consists of a sharpening spatial filter, a clamp filter, and a bilinear interpolation. To reduce the blurring and aliasing artifacts produced by the bilinear interpolation, the sharpening spatial and clamp filters are added as pre-filters. To minimize the memory buffers and computing resources of the proposed image processor design, T-model and inversed T-model convolution kernels are created for realizing the sharpening spatial and clamp filters. Furthermore, two T-model or inversed T-model filters are combined into a single filter that requires only a one-line-buffer memory, and a reconfigurable calculation unit is invented to decrease the hardware cost of the combined filter. In addition, the computing resources and hardware cost of the bilinear interpolator are efficiently reduced by algebraic manipulation and hardware-sharing techniques. The VLSI architecture in this work achieves 280 MHz with 6.08-K gate counts, and its core area is 30,378 μm² synthesized in a 0.13-μm CMOS process. Compared with previous low-complexity techniques, this work reduces gate counts by more than 34.4% and requires only a one-line-buffer memory.
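The bilinear interpolation at the core of such a scaler computes each output sample as a distance-weighted blend of its nearest input samples. A minimal 1-D sketch of that weighting (the 2-D case blends four pixels; the VLSI kernel and buffering details above are beyond this note):

```python
def linear_scale(row, out_len):
    """Resample a 1-D row to out_len samples by linear interpolation,
    keeping the first and last samples fixed."""
    n = len(row)
    out = []
    for i in range(out_len):
        pos = i * (n - 1) / (out_len - 1)   # map output index into input
        left = min(int(pos), n - 2)         # left neighbour index
        frac = pos - left                   # blend weight for the right one
        out.append(row[left] * (1 - frac) + row[left + 1] * frac)
    return out

print(linear_scale([0, 10], 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

Because each output needs only two neighbouring samples per axis, a hardware implementation can stream the image with a one-line buffer, which is the property the proposed architecture exploits.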
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMTIRJET Journal
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
Log-domain electronically-tuneable fully differential high order multi-functi...IJECEIAES
This paper presents the synthesis of fully deferential circuit that is capable of performing simultaneous high-pass, low-pass, and band-pass filtering in the log domain. The circuit utilizes modified Seevinck’s integrators in the current mode. The transfer function describing the filter is first presented in the form of a canonical signal flow graph through applying Mason’s gain formula. The resulting signal flow graph consists of summing points and pick-off points associated with current mode integrators within unity-gain negative feedback loops. The summing points and the pick-off points are then synthesized as simple nodes and current mirrors, respectively. A new fully differential current-mode integrator circuit is proposed to realize the integration operation. The proposed integrator uses grounded capacitors with no resistors and can be adjusted to work as either lossless or lossy integrator via tuneable current sources. The gain and the cutoff frequency of the integrator are adjustable via biasing currents. Detailed design and simulation results of an example of a 5th order filter circuit is presented. The proposed circuit can perform simultaneously 5th order low-pass filtering, 5th order high-pass filtering, and 4th order band-pass filtering. The simulation is performed using Pspice with practical Infineon BFP649 BJT model. Simulation results show good matching with the target.
NEW BER ANALYSIS OF OFDM SYSTEM OVER NAKAGAMI-n (RICE) FADING CHANNELijcseit
Modern wireless communication systems support high speed multimedia services. These services require
high data rates with acceptable error rates. Orthogonal Frequency Division Multiplexing (OFDM) is a
capable candidate to solve this problem. In this paper, a new expression for the BER of OFDM system has
been derived over Nakagami–n (Rice) fading channels using characteristics function (CHF) approach. The
exact probability density function of first order of Nakagami-n (Rice) random vector is used to derive the
expression for the error rates of OFDM system. The BER derivation of Rician fading channel is slightly
more complex compared to the Nakagami–m distribution because the PDF of the Rician RV contains an
explicit term of a modified Bessel function of first kind. Earlier, this problem was solved by replacing the
Bessel function with its infinite series and exponential integral representation. Here we propose an integral
expression to remove the complexity of the expression.
Design and optimization of a new compact 2.4 GHz-bandpass filter using DGS te...TELKOMNIKA JOURNAL
The objective of this work is the study, the design and the optimization of an innovative structure of a network of coupled copper metal lines deposited on the upper surface of a R04003 type substrate of height 0.813 with a ground deformed by slots (DGS). This structure is designed in an optimal configuration for use in the design of narrowband bandpass filter for wireless communication systems (WLAN), the aim of use the defected ground structure is to remove the unwanted harmonics in the rejection band, the simulation results obtained from this structure using CST software show a very high selectivity of the designed filter, a very low level of losses (less than-0.45 dB) with a size overall size of 43.5x34.3 mm.
Spatialization Parameter Estimation in MDCT Domain for Stereo AudioCSCJournals
For representing multi-channel audio at low bit rate parametric coding techniques are used in many audio coding standards. An MDCT domain parametric stereo coding algorithm which represents the stereo channels as the linear combination of the ‘sum’ channel derived from the stereo channels and a reverberated channel generated from the ‘sum ’channel has been reported in literature. This model is inefficient in capturing the stereo image since only four parameters per sub-band is used as spatialization parameters. In this work we improve this MDCT domain parametric coder with an augmented parameter extraction scheme using an additional reverberated channel. We further modify the scheme by using orthogonalized de-correlated channels for analysis and synthesis of parametric stereo. A synthesis scheme with perceptually scaled parameter set is also introduced. Finally we present, subjective evaluation of the different parametric stereo schemes using MUSHRA test and the increased the perceptual audio quality of the synthesized signals are evident from these test results.
A high performance fir filter architecture for fixed and reconfigurable appli...Ieee Xpert
A high performance fir filter architecture for fixed and reconfigurable applications A high performance fir filter architecture for fixed and reconfigurable applications A high performance fir filter architecture for fixed and reconfigurable applications A high performance fir filter architecture for fixed and reconfigurable applications
This document proposes an algorithm for efficiently computing 2D spatial convolution through image partitioning and short convolution. The algorithm partitions an input image into overlapping 6x6 blocks, which are then further partitioned into non-overlapping 3x3 sub-images. Convolution is computed for each sub-image independently using a variable-length filter, reducing computational complexity compared to FFT-based techniques. The outputs from each sub-image convolution are combined to reconstruct the original block. Simulation results demonstrate the effectiveness of the algorithm for tasks like edge detection and noise reduction through local image filtering.
Similar to Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Filter Banks for Image Coding (20)
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Filter Banks for Image Coding
Anamika Jain & Aditya Goel
International Journal of Image Processing (IJIP), Volume (5) : Issue (1) : 2011 46
Unconstrained Optimization Method to Design Two Channel
Quadrature Mirror Filter Banks for Image Coding
Anamika Jain ajain_bpl@yahoo.in
Department of Electronics and
Communication Engineering
M.A.N.I.T., Bhopal, 462051, INDIA
Aditya Goel adityagoel2@rediffmail.com
Department of Electronics and
Communication Engineering
M.A.N.I.T., Bhopal, 462051, INDIA
Abstract
This paper proposes an efficient method for the design of two-channel quadrature mirror filter (QMF) banks for subband image coding. The choice of filter bank is important, as it affects image quality as well as system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energies of the low-pass analysis filter of the filter bank. The objective function is minimized directly, using a nonlinear unconstrained method. Experimental results of the method on images show that the performance of the proposed method is better than that of existing methods. The impact of some filter characteristics, such as stopband attenuation, stopband edge, and filter length, on the performance of the reconstructed images is also investigated.
Keywords: Sub-Band Coding, MSE (Mean Square Error), Perfect Reconstruction, PSNR (Peak Signal to Noise Ratio), Quadrature Mirror Filter.
1. INTRODUCTION
Quadrature mirror filter (QMF) banks have been widely used in signal processing fields such as sub-band coding of speech and image signals [1-4], speech and image compression [5,6], transmultiplexers, equalization of wireless communication channels, source coding for audio and video signals, design of wavelet bases [7], sub-band acoustic echo cancellation, and discrete multitone modulation systems. In the design of QMF banks, it is required that the perfect reconstruction condition be achieved and that the intra-band aliasing be eliminated or minimized. Design methods [8,9] developed so far involve minimizing an error function directly in the frequency domain or time domain to achieve the design requirements. In the conventional QMF design techniques [10]-[19], to obtain the minimum point analytically, the objective function is evaluated by discretization, or iterative least squares methods based on linearization of the error function are used to modify the objective function. Thus, either the performance of the designed QMF bank degrades, since the solution obtained minimizes a discretized version of the objective function rather than the objective function itself, or the computational complexity is increased.
In this paper, a nonlinear optimization method is proposed for the design of a two-channel QMF bank. The perfect reconstruction condition is formulated in the time domain to reduce computational complexity, and the objective function is evaluated directly [12-19]. Various design techniques, including optimization based [20] and non-optimization based techniques, have been reported in the literature for the design of QMF banks. In optimization based techniques, the design problem is formulated either as a multi-objective or a single-objective nonlinear optimization problem, which is solved by various existing methods such as the least squares technique, the weighted least squares (WLS) technique [14-17], and the genetic algorithm [21]. In the early stages of research, the design methods developed were based on direct minimization of an error function in the frequency domain [8]. But, owing to the high degree of nonlinearity and the complexity of the optimization, these methods were not suitable for filters with larger numbers of taps. Therefore, Jain and Crochiere [9] introduced an iterative algorithm and formulated the design problem in quadratic form in the time domain. Thereafter, several new iterative algorithms [10, 12-21] have been developed, either in the time domain or in the frequency domain. Unfortunately, these techniques are complicated and are only applicable to two-band QMF banks of low order. Xu et al. [10,13,17] have proposed iterative methods in which the perfect reconstruction condition is formulated in the time domain to reduce the computational complexity of the design. For some applications, it is required that the reconstruction error show equiripple behaviour and that the stopband energies of the filters be kept at a minimum. To solve these problems, a two-step approach for the design of two-channel filter banks was developed. But this approach results in nonlinear phase and is not suitable for wideband audio signals. Therefore, a modified method for the design of QMF banks using nonlinear optimization has been developed, in which the prototype filter coefficients are optimized to minimize a combination of the reconstruction error and the passband and stopband residual energies.
FIGURE 1: Quadrature Mirror Filter Bank
A typical two-channel QMF bank, shown in Figure 1, splits the input signal x(n) into two subband signals of equal bandwidth, using the low-pass and high-pass analysis filters H0(z) and H1(z), respectively. These subband signals are downsampled by a factor of two to achieve signal compression or to reduce processing complexity. At the output end, the two subband signals are interpolated by a factor of two and passed through the lowpass and highpass synthesis filters F0(z) and F1(z), respectively. The outputs of the synthesis filters are combined to obtain the reconstructed signal x̂(n). The reconstructed signal x̂(n) differs from the input signal x(n) due to three errors: aliasing distortion (ALD), amplitude distortion (AMD), and phase distortion (PHD). While designing filters for the QMF bank, the main stress of most researchers has been on the elimination or minimization of these three distortions to obtain a perfect reconstruction (PR) or nearly perfect reconstruction (NPR) system. In several reported design methods [17-23], aliasing has been cancelled completely by selecting the synthesis filters cleverly in terms of the analysis filters, and the PHD has been eliminated using linear phase FIR filters. The overall transfer function of such an alias- and phase-distortion-free system turns out to be a function of the filter tap coefficients of the lowpass analysis filter only, as the highpass and lowpass analysis filters are related to each other by the mirror image symmetry condition around the quadrature frequency π/2. Therefore, the AMD can be minimized by optimizing the filter tap weights of the lowpass analysis filter. If the characteristics of the lowpass analysis filter are assumed to be ideal in its passband and stopband regions, the PR condition of the alias- and phase-distortion-free QMF bank is automatically satisfied in these regions, but not in the transition band. The objective function to be minimized is a linear combination of the reconstruction error in the time domain and the passband and stopband residual energies of the lowpass analysis filter of the filter bank. A nonlinear unconstrained optimization method [20] has been used to minimize the objective function by optimizing the coefficients of the lowpass analysis filter. A comparison of the design results of the proposed method with those of existing methods shows that this method is very effective in designing the two-channel QMF bank and gives improved performance.
The organization of the paper is as follows: in Section 2, a brief analysis of the QMF bank is given. Section 3 describes the formulation of the design problem to obtain the objective
function. The minimization of the objective function using an unconstrained optimization method is also explained there, and Section 4 presents the proposed design algorithm. In Section 5, two design examples (cases) are presented to illustrate the proposed design algorithm, and an application of the proposed method to subband coding of images is explained. A comparison of the simulation results of the proposed algorithm with those of existing methods is also discussed.
2. ANALYSIS OF THE TWO-CHANNEL QMF BANK
The z-transform of the output signal x̂(n) of the two-channel QMF bank can be written as [18-20, 23]

X̂(z) = ½[H0(z)F0(z) + H1(z)F1(z)]X(z) + ½[H0(−z)F0(z) + H1(−z)F1(z)]X(−z).   (1)
Aliasing can be removed completely by defining the synthesis filters as given below [1, 20-23]:

F0(z) = H1(−z) and F1(z) = −H0(−z).   (2)
Therefore, using eqn. (2) and the relationship H1(z) = H0(−z) between the mirror image filters, the expression for the alias-free reconstructed signal can be written as

X̂(z) = ½[H0(z)H1(−z) − H1(z)H0(−z)]X(z) = ½[H0²(z) − H0²(−z)]X(z),   (3)

or

X̂(z) = T(z) X(z),   (4)

where T(z) is the overall system function of the alias-free QMF bank, given by

T(z) = ½[H0²(z) − H0²(−z)].   (5)
To obtain perfect reconstruction, the AMD (amplitude distortion) and PHD (phase distortion) should also be eliminated, which can be done if the reconstructed signal x̂(n) is simply made equal to a scaled and delayed version of the input signal x(n). In that situation the overall system function must be equal to

T(z) = c z^(−(N−1)),   (6)
where (N−1) is the reconstruction delay. The perfect reconstruction condition in the time domain can be expressed using convolution matrices as [20]

B h0 = m,

where

B = [d1 + dN, d2 + d(N−1), ..., d(N/2) + d(N/2+1)],

D = [d1, d2, ..., dN] = 2 × | h(1)    h(0)    0       0     ...  0    |
                            | h(3)    h(2)    h(1)    h(0)  ...  0    |
                            | ...     ...     ...     ...        ...  |
                            | h(N−1)  h(N−2)  h(N−3)  ...        h(0) |

h0 = [h(0), h(1), ..., h(N/2 − 1)]^T,
m = [0, 0, ..., 1]^T,   (7)

with dj denoting the jth column of D.
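As a concrete check, eqn. (7) can be assembled numerically. The sketch below is an assumption-laden illustration (it assumes NumPy; the half-length coefficients are the Case 1 design quoted in Section 5): it builds D, folds it into B using the symmetry of h, and evaluates the residual of B h0 = m.

```python
import numpy as np

# Half-length impulse response h0(n), n = 0..N/2-1, taken from the Case 1
# design of Section 5 (N = 24); the full filter follows from h(n) = h(N-1-n).
h0 = np.array([-0.0087, -0.0119, 0.0094, 0.0221, -0.0123, -0.0332,
               0.0235, 0.0540, -0.0463, -0.0970, 0.1356, 0.4623])
N = 2 * len(h0)                      # N = 24
h = np.concatenate([h0, h0[::-1]])   # symmetric full-length filter

# Convolution matrix D of eqn. (7): row i holds 2*[h(2i+1), h(2i), ..., h(0)]
D = np.zeros((N // 2, N))
for i in range(N // 2):
    for k in range(2 * i + 2):
        D[i, k] = 2.0 * h[2 * i + 1 - k]

# Fold columns using the symmetry of h:
# B = [d1 + dN, d2 + d_{N-1}, ..., d_{N/2} + d_{N/2+1}]
B = D[:, : N // 2] + D[:, N // 2:][:, ::-1]
m = np.zeros(N // 2)
m[-1] = 1.0

r = B @ h0 - m                       # residual of the PR condition B h0 = m
Er = float(r @ r)                    # reconstruction square error, eqn. (11)
```

For a near-PR design the residual is small but nonzero, which is exactly the quantity Er penalized in the objective function.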
where h0(n), for n = 0, 1, 2, ..., N/2 − 1, is the impulse response of the filter H0. To satisfy the linear phase FIR property, the impulse response h0(n) of the lowpass analysis filter can be assumed to be symmetric. Therefore,

h0(n) = h0(N − 1 − n),  0 ≤ n ≤ N − 1,
h0(n) = 0,              n < 0 and n ≥ N.   (8)
For real h0(n), the amplitude function H_R(ω) is an even function of ω. Hence, by substituting eqn. (8) into eqn. (5), the overall frequency response of the QMF bank can be written as

T(e^(jω)) = (e^(−jω(N−1))/2)[H_R²(ω) − (−1)^(N−1) H_R²(π − ω)].   (9)
If the filter length N is odd, this equation gives T(e^(jω)) = 0 at ω = π/2, implying severe amplitude distortion. In order to cancel the aliasing completely, the synthesis filters are related to the analysis filters by eqn. (2) and H1(z) = H0(−z). This means that the overall design task reduces to the determination of the filter tap coefficients of the linear phase FIR low-pass analysis filter H0(z) only, subject to the perfect reconstruction condition of eqn. (7). Therefore, we propose to minimize the following objective function for the design of the QMF bank, by optimizing the filter tap weights of the lowpass filter H0(z):
Φ = α1 Ep + α2 Es + β Er,   (10)

where α1, α2, β are real constants, Ep and Es are measures of the passband and stopband error of the filter H0(z), respectively, and Er is the square error of the overall transfer function of the QMF bank in the time domain.
The square error Er is given by

Er = (B h0 − m)^T (B h0 − m).   (11)
3. PROBLEM FORMULATION
3.1 PASSBAND ERROR
For even N, the frequency response of the lowpass filter H0(z) is given by

H0(e^(jω)) = e^(−jω(N−1)/2) Σ_{n=0}^{N/2−1} 2 h0(n) cos(ω((N−1)/2 − n))   (12)
           = e^(−jω(N−1)/2) Σ_{n=0}^{N/2−1} b(n) cos(ω((N−1)/2 − n))   (13)
           = e^(−jω(N−1)/2) H_R(ω),   (14)

where

b(n) = 2 h0(n)   (15)

and H_R(ω) is the amplitude function, defined as

H_R(ω) = b^T c.   (16)

The vectors b and c are

b = [b(0) b(1) b(2) ... b(N/2 − 1)]^T,
c = [cos(ω(N−1)/2) cos(ω((N−1)/2 − 1)) ... cos(ω/2)]^T.
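The factorisation H0(e^(jω)) = e^(−jω(N−1)/2) H_R(ω) of eqns. (12)-(16) is easy to verify numerically. A minimal sketch, assuming NumPy; the random half-length filter is illustrative test data only:

```python
import numpy as np

def amplitude_response(h0, omega):
    """H_R(w) = b^T c for a symmetric, even-length lowpass filter (eqns. 13-16)."""
    N = 2 * len(h0)
    b = 2.0 * h0                                   # b(n) = 2 h0(n), eqn. (15)
    # c(n) = cos(w((N-1)/2 - n)), n = 0..N/2-1, evaluated for each frequency
    c = np.cos(np.outer(omega, (N - 1) / 2.0 - np.arange(N // 2)))
    return c @ b

# Cross-check against the DFT: e^{jw(N-1)/2} H0(e^{jw}) must equal H_R(w).
rng = np.random.default_rng(0)
h0 = rng.standard_normal(8)                        # arbitrary half-length vector
h = np.concatenate([h0, h0[::-1]])                 # impose h(n) = h(N-1-n)
w = np.linspace(0.0, np.pi, 257)
H0 = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
HR = amplitude_response(h0, w)
assert np.allclose(np.exp(1j * w * (len(h) - 1) / 2.0) * H0, HR, atol=1e-9)
```

The linear-phase factor is removed exactly by the symmetry of h, leaving the real amplitude function H_R(ω).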
The mean square error in the passband may be taken as

Ep = (1/π) ∫_0^{ωp} b^T (1 − c)(1 − c)^T b dω   (17)
   = b^T Q b,   (18)

where 1 denotes the all-ones vector (the value of c at ω = 0) and the matrix Q is

Q = (1/π) ∫_0^{ωp} (1 − c)(1 − c)^T dω,   (19)

with (m, n)th element given by

q_mn = (1/π) ∫_0^{ωp} (1 − cos ω((N−1)/2 − m))(1 − cos ω((N−1)/2 − n)) dω.   (20)
3.2 STOPBAND ERROR
The mean square error in the stopband may be taken as

Es = (1/π) ∫_{ωs}^{π} b^T c c^T b dω   (21)
   = b^T P b,   (22)

where the matrix P is

P = (1/π) ∫_{ωs}^{π} c c^T dω,   (23)

with (m, n)th element given by

p_mn = (1/π) ∫_{ωs}^{π} cos(ω((N−1)/2 − m)) cos(ω((N−1)/2 − n)) dω.   (24)
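Q and P do not depend on the filter itself, so they can be tabulated once per choice of (N, ωp, ωs). The sketch below (assuming NumPy) builds both matrices by trapezoidal quadrature and cross-checks b^T Q b against the defining integral (17) for an arbitrary b:

```python
import numpy as np

def error_matrices(N, wp, ws, K=4096):
    """Q (eqn. 19) and P (eqn. 23) by trapezoidal quadrature on dense grids."""
    n = (N - 1) / 2.0 - np.arange(N // 2)
    def build(grid, passband):
        M = np.zeros((N // 2, N // 2))
        dw = np.full(len(grid), grid[1] - grid[0])
        dw[[0, -1]] *= 0.5                     # trapezoidal end-point weights
        for w, d in zip(grid, dw):
            c = np.cos(w * n)
            v = (1.0 - c) if passband else c   # (1 - c) for Q, c for P
            M += d * np.outer(v, v)
        return M / np.pi
    return (build(np.linspace(0.0, wp, K), True),
            build(np.linspace(ws, np.pi, K), False))

N, wp, ws = 12, 0.4 * np.pi, 0.6 * np.pi
Q, P = error_matrices(N, wp, ws)

# Sanity check: b'Qb reproduces the integral (17) computed directly.
rng = np.random.default_rng(1)
b = rng.standard_normal(N // 2)
n = (N - 1) / 2.0 - np.arange(N // 2)
grid = np.linspace(0.0, wp, 4096)
dw = np.full(len(grid), grid[1] - grid[0])
dw[[0, -1]] *= 0.5
HR = np.cos(np.outer(grid, n)) @ b             # H_R(w) = b'c on the grid
Ep_direct = float(np.sum(dw * (HR[0] - HR) ** 2)) / np.pi
assert abs(float(b @ Q @ b) - Ep_direct) < 1e-8
```

Closed-form integration of the products of cosines in eqns. (20) and (24) is also possible; quadrature is used here only to keep the sketch short.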
3.3 THE RECONSTRUCTION SQUARE ERROR
The square error

Er = (B h0 − m)^T (B h0 − m)   (25)

is used to approximate the perfect reconstruction condition in the time domain, where B and m are as defined in eqn. (7).
3.4 MINIMIZATION OF THE OBJECTIVE FUNCTION
Using eqns. (18), (22), and (25), the objective function Φ given by eqn. (10) can be written as

Φ = α1 b^T Q b + α2 b^T P b + β Er
  = 4 α1 h0^T Q h0 + 4 α2 h0^T P h0 + β Er.   (26)

It is in quadratic form without any constraints. Therefore, the design problem reduces to unconstrained optimization of the objective function given in eqn. (26).
4. THE DESIGN ALGORITHM
In the designs proposed by Jain-Crochiere [9] and Swaminathan-Vaidyanathan [26], a unit energy constraint on the filter coefficients was also imposed. In the algorithm presented here, the unit energy constraint is imposed within some prespecified limit. The design algorithm proceeds through the following steps:
(1) Assume initial values of α1, α2, β, ϵ1, ωp, ωs, and N.
(2) Start with an initial vector h0 = [h0(0) h0(1) h0(2) ... h0(N/2 − 1)]^T satisfying the unit energy constraint within a prespecified tolerance, i.e.

u = |1 − 2 Σ_{k=0}^{N/2−1} h0²(k)| < ϵ1.   (27)

(3) Set the function tolerance and the convergence tolerance.
(4) Optimize the objective function of eqn. (26) using the unconstrained optimization method for the specified tolerance.
(5) Evaluate all the component filters of the QMF bank from h0.
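Steps (1)-(5) can be sketched as follows, assuming NumPy/SciPy. The paper's implementation uses MATLAB's interior reflective Newton method; BFGS is substituted here only as a stand-in unconstrained minimizer, and the weights are the Case 1 values quoted in Section 5:

```python
import numpy as np
from scipy.optimize import minimize

def design_qmf(N=24, wp=0.4 * np.pi, ws=0.6 * np.pi,
               alpha1=0.1, alpha2=0.1, beta=1.0, K=2048):
    half = N // 2
    n = (N - 1) / 2.0 - np.arange(half)

    # Q (eqn. 19) and P (eqn. 23) by trapezoidal quadrature.
    def moment(grid, passband):
        M = np.zeros((half, half))
        dw = np.full(len(grid), grid[1] - grid[0])
        dw[[0, -1]] *= 0.5
        for w, d in zip(grid, dw):
            c = np.cos(w * n)
            v = (1.0 - c) if passband else c
            M += d * np.outer(v, v)
        return M / np.pi

    Q = moment(np.linspace(0.0, wp, K), True)
    P = moment(np.linspace(ws, np.pi, K), False)

    def Er(h0):
        # 2*conv(h,h) at odd lags equals B h0 of eqn. (7); PR wants [0,...,0,1].
        h = np.concatenate([h0, h0[::-1]])
        t = 2.0 * np.convolve(h, h)[1:N:2]
        t[-1] -= 1.0
        return float(t @ t)

    def phi(h0):                                   # objective, eqn. (26)
        return (4.0 * alpha1 * float(h0 @ Q @ h0)
                + 4.0 * alpha2 * float(h0 @ P @ h0)
                + beta * Er(h0))

    # Step (2): initial vector satisfying the unit energy constraint, eqn. (27).
    h0_init = np.zeros(half)
    h0_init[-1] = 2.0 ** -0.5
    res = minimize(phi, h0_init, method="BFGS", options={"gtol": 1e-8})
    return res.x, res.fun

h0_opt, phi_opt = design_qmf()
```

Step (5) then follows from the relations of Section 2: H1(z) = H0(−z), F0(z) = H0(z), F1(z) = −H1(z).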
The performance of the proposed filter and filter bank is evaluated in terms of the following significant parameters:
Mean square error in the passband:

Ep = (1/π) ∫_0^{ωp} [H0(0) − H0(ω)]² dω   (28)

Mean square error in the stopband:

Es = (1/π) ∫_{ωs}^{π} H0²(ω) dω   (29)

Stopband edge attenuation:

As = −20 log10(H0(ωs))   (30)

Measure of ripple:

ε = max_ω 10 log10 T(ω) − min_ω 10 log10 T(ω)   (31)
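These four figures of merit can be evaluated on a dense frequency grid. A sketch assuming NumPy (the 1/π normalisation of Es follows eqn. (21); the Case 2 coefficients of Section 5 are used as illustrative input):

```python
import numpy as np

def qmf_metrics(h0, wp, ws, K=8192):
    """Ep, Es, As and ripple of eqns. (28)-(31), evaluated on a dense grid."""
    N = 2 * len(h0)
    h = np.concatenate([h0, h0[::-1]])            # symmetric full-length filter
    w = np.linspace(0.0, np.pi, K)
    dw = w[1] - w[0]
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(N))) @ h)    # |H0(e^{jw})|
    Ep = float(np.sum((H[0] - H[w <= wp]) ** 2)) * dw / np.pi  # eqn. (28)
    Es = float(np.sum(H[w >= ws] ** 2)) * dw / np.pi           # eqn. (29)
    As = -20.0 * np.log10(H[np.searchsorted(w, ws)])           # eqn. (30)
    t = np.convolve(h, h)                         # T(z) of eqn. (5):
    t[::2] = 0.0                                  # only odd lags survive
    T = np.abs(np.exp(-1j * np.outer(w, np.arange(len(t)))) @ t)
    ripple = 10.0 * np.log10(T.max() / T.min())   # eqn. (31)
    return Ep, Es, As, ripple

# Case 2 half-length coefficients (N = 32) from Section 5:
h0_case2 = np.array([-0.0034, -0.0061, 0.0020, 0.0104, -0.0021, -0.0154,
                     0.0050, 0.0237, -0.0102, -0.0349, 0.0218, 0.0549,
                     -0.0460, -0.0987, 0.1343, 0.4628])
Ep, Es, As, ripple = qmf_metrics(h0_case2, 0.4 * np.pi, 0.6 * np.pi)
```

The exact values depend on the grid density and the normalisation conventions, so they should be read as order-of-magnitude checks against the figures reported in Section 5, not as exact reproductions.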
5. EXPERIMENTAL RESULTS AND DISCUSSION
The proposed technique for the design of the QMF bank has been implemented in MATLAB. Two design cases are presented to illustrate the effectiveness of the proposed algorithm. The method starts by initializing the filter coefficients h0(n) to zero for all n, 0 ≤ n ≤ N − 1, except that h0(N/2 − 1) = h0(N/2) = 2^(−1/2); half of the coefficients are then solved for using the unconstrained optimization (interior reflective Newton) method. With this choice of the initial values of the filter coefficients, the unit energy constraint is satisfied.
In both designs, the stopband first lobe attenuation (As) has been obtained, and the constants α1, α2, β, ϵ1 have been selected by trial and error to obtain the best possible results. The parameters used in the two designs, which will be referred to as Cases 1 and 2, are: N = 24, α1 = 0.01, α2 = 0.1, β = 0.00086, function tolerance = 1e−6, X tolerance = 1e−8, ϵ1 = 0.15, iterations (IT) = 38, ωp = 0.4π, ωs = 0.6π, τ = 0.5; and N = 32, α1 = 0.1, α2 = 0.1, β = 0.00086, function tolerance = 1e−6, X tolerance = 1e−8, ϵ1 = 1e−8, ωp = 0.4π, ωs = 0.6π, τ = 0.5, respectively. For comparison purposes, the method of Chen and Lee [8] was applied to design QMF banks with the parameters specified above and with the same initial h0 as used by the proposed method in both design examples (cases). The comparisons are made in terms of phase response, passband energy, stopband energy, stopband attenuation, and peak ripple (ε).
Case 1
For N = 24, ωp = 0.4π, ωs = 0.6π, α1 = 0.1, α2 = 0.1, ϵ1 = 0.15, β = 1, the following filter coefficients for 0 ≤ n ≤ N/2 − 1 are obtained:
h0(n) = [-0.0087 -0.0119 0.0094 0.0221 -0.0123 -0.0332 0.0235 0.0540 -0.0463 -0.0970 0.1356 0.4623]
The corresponding normalized magnitude plots of the analysis filters H0(z) and H1(z) are shown in Figures 2a and 2c. Figure 2e shows the reconstruction error of the QMF bank (in dB). The significant parameters obtained are: Ep = 0.1438, Es = 1.042×10^−6, As = 45.831 dB, and ε = 0.9655.
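The reported Case 1 coefficients can be used to confirm the alias-cancellation structure of Section 2 numerically (a sketch assuming NumPy):

```python
import numpy as np

# Case 1 half-length coefficients h0(n); full filter via h(n) = h(N-1-n).
h0 = np.array([-0.0087, -0.0119, 0.0094, 0.0221, -0.0123, -0.0332,
               0.0235, 0.0540, -0.0463, -0.0970, 0.1356, 0.4623])
h = np.concatenate([h0, h0[::-1]])
n = np.arange(len(h))
h1 = ((-1.0) ** n) * h          # H1(z) = H0(-z)
f0 = h.copy()                   # F0(z) = H1(-z) = H0(z), eqn. (2)
f1 = -h1                        # F1(z) = -H0(-z), eqn. (2)

# Alias term A(z) = 0.5*[H0(-z)F0(z) + H1(-z)F1(z)] must vanish.
alias = 0.5 * (np.convolve(h1, f0) + np.convolve(h, f1))
# Distortion function T(z) = 0.5*[H0(z)F0(z) + H1(z)F1(z)], eqn. (5).
t = 0.5 * (np.convolve(h, f0) + np.convolve(h1, f1))
```

The alias term cancels to machine precision by construction, while t approximates c z^(−(N−1)) with c ≈ 1/2: its coefficient at lag N−1 = 23 is close to 0.5, its even-lag coefficients vanish, and the residual at the other odd lags is the (small) reconstruction error of the near-PR design.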
Case 2
For N = 32, ωp = 0.4π, ωs = 0.6π, α1 = 0.1, α2 = 0.1, ϵ1 = 0.15, β = 0.00086, the following filter coefficients for 0 ≤ n ≤ N/2 − 1 are obtained:
h0(n) = [-0.0034 -0.0061 0.0020 0.0104 -0.0021 -0.0154 0.0050 0.0237 -0.0102 -0.0349 0.0218 0.0549 -0.0460 -0.0987 0.1343 0.4628]
The corresponding normalized magnitude plots of the analysis filters H0(z) and H1(z) are shown in Figures 2b and 2d. Figure 2f shows the reconstruction error of the QMF bank (in dB). The significant parameters obtained are: Ep = 0.0398, Es = 2.69×10^−7, As = 53.391 dB, and ε = 0.27325.
FIGURE 2: (a) Amplitude response of the prototype analysis filters for N = 24. (b) Amplitude response of the prototype analysis filters for N = 32. (c) Magnitude response of the overall filter bank, N = 24. (d) Magnitude response of the overall filter bank, N = 32. (e) Reconstruction error of the overall filter bank (N = 24) in dB. (f) Reconstruction error of the overall filter bank (N = 32) in dB.
The simulation results of the proposed method are compared with the methods of the Jain-Crochiere design [9], the gradient method [26], Chen-Lee [8], Lu-Xu-Antoniou [10], Xu-Lu-Antoniou [21], [22], Sahu and Soni [17], the general-purpose method [24], and Smith-Barnwell [15], for N = 32, and are summarized in Table 1. The results indicate that the performance of the proposed method is much better than that of all the considered methods in terms of As. The proposed method also gives better performance than the general-purpose and Smith-Barnwell methods in terms of Ep; than the Jain-Crochiere, gradient, Chen-Lee, Lu-Xu-Antoniou, and Xu-Lu-Antoniou methods in terms of Es; and than the general-purpose and Smith-Barnwell methods in terms of linearity of the phase response.
TABLE 1: Comparison of the proposed method with other existing methods based on significant parameters for N = 32
According to the results obtained, some observations about the filter characteristics can be made. The frequency response of H0 for 16-, 24-, and 32-tap prototype filters is shown in Figure 2. The effect of the parameter N on the stopband attenuation and the reconstruction error of the QMF bank is clearly seen in the figure: a longer prototype filter leads to better stopband attenuation and better overall performance, and the maximum overall ripple of the QMF bank decreases with increasing prototype filter length up to N = 32. It can be noted that as the length is increased to 64, a slight dip appears in the frequency response characteristic of the prototype filter, which deteriorates the overall performance of the QMF bank.
5.1 APPLICATION TO SUBBAND CODING OF IMAGES
In order to assess the performance of the linear-phase PR QMF banks, the designed filter banks
were applied to the subband coding of 256x256 and 512x512 Cameraman, Mandrill and Lena
images. The criteria of comparison are the objective and subjective performance of the encoded
images. The influence of certain filter characteristics, such as stopband attenuation, maximum overall
ripple, and filter length, on the performance of the encoded images is also investigated.
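As a rough sketch of the decomposition stage of such a coder (one-level separable split into LL, LH, HL and HH subbands), the NumPy code below uses the orthonormal Haar QMF pair in place of the paper's designed prototypes, since Haar admits a compact, exactly invertible implementation:

```python
import numpy as np

R2 = np.sqrt(2.0)

def analysis(x):
    """Split along axis 0 into decimated lowpass/highpass (Haar QMF pair)."""
    return (x[0::2] + x[1::2]) / R2, (x[0::2] - x[1::2]) / R2

def synthesis(lo, hi):
    """Invert analysis() along axis 0."""
    x = np.empty((2 * lo.shape[0],) + lo.shape[1:])
    x[0::2] = (lo + hi) / R2
    x[1::2] = (lo - hi) / R2
    return x

def subband2d(img):
    """One-level separable decomposition into LL, LH, HL, HH subbands."""
    L, H = analysis(img)        # filter and decimate the rows
    LL, LH = analysis(L.T)      # then the columns of the lowpass band
    HL, HH = analysis(H.T)      # and the columns of the highpass band
    return LL.T, LH.T, HL.T, HH.T

def inverse2d(LL, LH, HL, HH):
    """Reassemble the image from its four subbands."""
    L = synthesis(LL.T, LH.T).T
    H = synthesis(HL.T, HH.T).T
    return synthesis(L, H)
```

Quantizing the subbands between `subband2d` and `inverse2d` (coarsely for HH, finely for LL) is what turns this into an actual coder; with no quantization the Haar pair reconstructs the image exactly, whereas the paper's longer near-PR prototypes trade a small reconstruction error for much better band separation.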
A common measure of encoded image quality is the peak signal-to-noise ratio, which is given as:
PSNR = 20 log10 (255 / √MSE)   (32)

where MSE denotes the mean squared error

MSE = (1 / MN) Σ_{x=1}^{M} Σ_{y=1}^{N} [ I(x, y) − I′(x, y) ]²   (33)

where M, N are the dimensions of the image and I(x, y), I′(x, y) are the original and reconstructed
images, respectively.
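Eqs. (32) and (33) translate directly into code. The sketch below assumes 8-bit images (peak value 255), the usual form consistent with the PSNR values reported in Table 2:

```python
import numpy as np

def mse(orig, rec):
    """Mean squared error between original and reconstructed images, Eq. (33)."""
    orig = np.asarray(orig, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    return np.mean((orig - rec) ** 2)

def psnr(orig, rec):
    """Peak signal-to-noise ratio in dB for 8-bit images, Eq. (32)."""
    return 20.0 * np.log10(255.0 / np.sqrt(mse(orig, rec)))
```

For example, a reconstruction that is uniformly off by one grey level from the original has MSE = 1 and PSNR = 20 log10(255) ≈ 48.1 dB.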
Methods                  Ep          Es          As (dB)   ∈ (dB)   Phase response
Jain–Crochiere [9]       2.30×10−8   1.50×10−6   33        0.015    Linear
Gradient method [26]     2.64×10−8   3.30×10−6   33.6      0.009    Linear
General purpose [24]     0.155       6.54×10−8   49.2      0.016    Nonlinear
Smith–Barnwell [15]      0.2         1.05×10−6   39        0.019    Nonlinear
Chen–Lee [8]             2.11×10−8   1.55×10−6   34        0.016    Linear
Lu–Xu–Antoniou [21]      1.50×10−8   1.54×10−6   35        0.015    Linear
Xu–Lu–Antoniou [22]      3.50×10−8   5.71×10−6   35        0.031    Linear
Sahu–Soni [17]           1.45×10−8   2.76×10−6   33.913    0.0269   Linear
Proposed method          0.0398      2.69×10−7   53.391    0.2732   Linear
In general, for satisfactory reconstruction of the original image, the MSE should be low and the PSNR
high. The results for the encoded Cameraman, Mandrill and Lena images are shown in Figure 3. For
all three images, the best filters in the sense of rate-distortion performance are QMF banks with
prototype filter length greater than 16. The results obtained show that linear-phase PR QMF banks
are quite competitive with the best known biorthogonal filters for image coding with respect to PSNR
performance on the Cameraman, Mandrill and Lena images. The PSNR for all three images increases
considerably for filter lengths greater than 16, and the shorter lengths affect the Cameraman image
more than the other two. In experiments with QMF banks having prototype filters of the same length
but different frequency responses, filters with better stopband attenuation give better PSNR
performance.
FIGURE 3: Subband coding of images using designed filter of length N (a) original cameraman image
(b) reconstructed cameraman image for N=24 (c) reconstructed cameraman image N=32 (d) reconstructed
cameraman image N=64 (e) original mandrill image (f) reconstructed mandrill image N=24 (g) reconstructed
mandrill image N=32 (h) reconstructed mandrill image N=64 (i) original Lena image (j) reconstructed Lena
image N=24 (k) reconstructed Lena image N=32 (l) reconstructed Lena image N=64
In addition to the quantitative PSNR comparison, the reconstructed images were evaluated to assess
their perceptual quality. For QMF banks, the perceptual quality of the image improves with the
increasing length of the prototype filter h0; this is especially obvious for filter lengths above 16.
The quality of the encoded images obtained with QMF banks is very close to that of the
original image. As expected, the most disturbing visual artifact in our experiments
was ringing. At lower filter lengths, this type of error affects the quality of the reconstructed images
significantly. The results shown in Table 2 are for a single-level decomposition; as the level of
decomposition increases, the PSNR for the Cameraman image is reduced from approximately 86 dB
(N = 24) to 75 dB. Further, as the length is increased beyond 32, the complexity increases, and for
filter length 64 the images become brighter and both the objective and subjective performance of the
images deteriorates. Thus a filter of length 24 or 32 may be used for satisfactory performance in
terms of both least MSE and highest PSNR.
Length of   Stopband   Max overall   Stopband           PSNR                              MSE
prototype   edge       ripple (dB)   attenuation (dB)   Cameraman  Mandrill  Lena        Cameraman  Mandrill  Lena
filter
8           0.31425    8.90863       34.7831            -12        ------    ------      1.24e6     ---       -----
12          0.309      5.80215       37.1274            52.7497    52.3128   52.8201     0.3452     0.3818    0.3397
24          0.30375    1.0089        45.8315            86.2702    83.2169   87.4151     1.5e-4     3.1e-4    1.17e-4
32          0.3025     0.30769       53.3916            89.8796    88.4642   90.2282     6.6e-5     9.26e-5   6.16e-5
64          ………        0.11733       --------           53.2733    50.0073   54.3468     0.3060     0.6492    0.2390

TABLE 2: Performance results of designed QMF banks on image coding
6. CONCLUSIONS
In this paper, a modified technique has been proposed for the design of QMF banks. The
proposed method optimizes the prototype filter response characteristics in the passband and stopband
and also the square error of the overall transfer function of the QMF bank. The method has been
developed and simulated in MATLAB, and two design cases have been presented to illustrate its
effectiveness. A comparison of the simulation results in Table 1 indicates that the proposed method
gives overall improved performance compared with the existing methods and is very effective for
designing quadrature mirror filter banks.
We have also investigated the use of linear phase PR QMF banks for subband image coding.
Coding experiments conducted on image data indicate that QMF banks are competitive with the
best biorthogonal filters for image coding. The influence of certain filter characteristics on the
performance of the encoded image is also analysed. It has been verified that filters with higher
stopband attenuation achieve better rate-distortion performance. Ringing effects can be reduced by
trading off stopband attenuation against filter length. Experimental results show that 24- and 32-tap
filters are the best choice in terms of both objective and subjective performance.
7. REFERENCES
1. D. Esteban, C. Galand. “Application of quadrature mirror filter to split band voice coding schemes”.
Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp.
191–195, 1977.
2. R.E. Crochiere. “Sub-band coding”. Bell Syst. Tech. Journal, 9, pp. 1633–1654, 1981.
3. M.J.T. Smith, S.L. Eddins. “Analysis/synthesis techniques for sub-band image coding”. IEEE Trans.
Acoust. Speech Signal Process: ASSP-38, pp.1446–1456, (8)1990.
4. D.Taskovski, M. Bogdanov, S.Bogdanova. “Application Of Linear-Phase Near-Perfect-
Reconstruction Qmf Banks In Image Compression”. pp. 68-71, 1998.
5. J.W. Woods, S.D. O’Neil. “Sub-band coding of images”. IEEE Trans. Acoust. Speech Signal
Process.: ASSP-34, pp.1278–1288, (10) 1986.
6. T.D. Tran, T.Q. Nguyen. “On M-channel linear phase FIR filter banks and applications in image
compression”. IEEE Trans. Signal Process.45, pp.2175–2187, (9)1997.
7. S.C. Chan, C.K.S. Pun, K.L. Ho, “New design and realization techniques for a class of perfect
reconstruction two-channel FIR filter banks and wavelet bases,” IEEE Trans. Signal Process. 52,
pp.2135–2141, (7) 2004.
8. C.K. Chen, J.H. Lee. “Design of quadrature mirror filters with linear phase in the frequency
domain”. IEEE Trans. Circuits Syst., vol. 39, pp. 593-605, (9) 1992.
9. V.K. Jain, R.E. Crochiere. “Quadrature mirror filter design in the time domain”. IEEE Trans.
Acoust., Speech, Signal Processing, ASSP-32, pp. 353–361, (4) 1984.
10. H. Xu, W.S. Lu, A. Antoniou. “An improved method for the design of FIR quadrature mirror image
filter banks”. IEEE Trans. Signal Process.46, pp.1275–1281,(6) 1998.
11. C.K. Goh, Y.C. Lim. “An efficient algorithm to design weighted minimax PR QMF banks”. IEEE
Trans. Signal Process. 47, pp. 3303–3314, 1999.
12. K. Nayebi, T.P. Barnwell III, M.J.T. Smith. “Time domain filter analysis: A new design theory”. IEEE
Trans. Signal Process. 40 ,pp.1412–1428, (6) 1992.
13. P.P. Vaidyanathan. “Multirate Systems and Filter Banks”. Prentice Hall, Englewood Cliffs, NJ,
1993.
14. W.S. Lu, H. Xu, A. Antoniou. “A new method for the design of FIR quadrature mirror-image filter
banks”. IEEE Trans. Circuits Syst. II: Analog Digital Signal Process. 45,pp.922–927,(7) 1998.
15. M.J.T. Smith, T.P. Barnwell III. “Exact reconstruction techniques for tree structured sub-band
coders.” IEEE Trans. Acoust. Speech Signal Process. ASSP-34 ,pp.434–441, (6) 1986.
16. J.D. Johnston. “A filter family designed for use in quadrature mirror filter banks”. Proc. IEEE Int.
Conf. Acoust., Speech, Signal Processing, pp. 291-294, Apr. 1980.
17. O.P. Sahu, M.K. Soni, I.M. Talwar. “Marquardt optimization method to design two-channel
quadrature mirror filter banks”. Digital Signal Processing, vol. 16, No. 6, pp. 870-879, Nov. 2006.
18. H. C. Lu, and S. T. Tzeng. “Two channel perfect reconstruction linear phase FIR filter banks for
subband image coding using genetic algorithm approach”. International journal of systems science,
vol.1, No. 1, pp.25-32, 2001.
19. Y. C. Lim, R. H. Yang, S. N. Koh. “The design of weighted minimax quadrature mirror filters”. IEEE
Transactions Signal Processing, vol. 41, No. 5, pp. 1780-1789, May 1993.
20. H. Xu, W. S. Lu, A. Antoniou. “A novel time domain approach for the design of Quadrature mirror
filter banks”. Proceeding of the 37th Midwest symposium on circuit and system, vol.2, No.2, pp.
1032-1035, 1995.
21. H. Xu, W. S. Lu, A. Antoniou. “A new method for the design of FIR QMF banks”. IEEE transaction
on Circuits, system-II: Analog Digital processing, vol. 45, No.7, pp. 922-927, Jul., 1998.
22. H. Xu, W.-S. Lu, A. Antoniou. “An improved method for the design of FIR quadrature mirror-image
filter banks”. IEEE Transactions on signal processing, vol. 46, pp. 1275-1281, May 1998.
23. C. W. Kok, W. C. Siu, Y. M. Law. “Peak constrained least QMF banks”. Journal of signal
processing, vol. 88, pp. 2363-2371, 2008.
24. R. Bregovic, T. Saramaki. “A general-purpose optimization approach for designing two-channel
FIR filter banks”. IEEE Transactions on signal processing, vol.51,No.7, Jul. 2003.
25. C. D. Creusere, S. K. Mitra. “A simple method for designing high quality prototype filters for M-band
pseudo QMF banks”. IEEE Transactions on Signal Processing, vol. 43, pp. 1005-1007, 1995.
26. K.Swaminathan, P.P. Vaidyanathan. “Theory and design of uniform DFT, parallel QMF banks”.
IEEE Trans. Circuits Syst. 33 pp.1170–1191, (12) 1986.
27. A. Kumar, G. K. Singh, R. S. Anand. “Near perfect reconstruction quadrature mirror filter”.
International Journal of Computer Science and Engineering, vol. 2, No.3, pp.121-123, Feb., 2008.
28. A. Kumar, G. K. Singh, R. S. Anand. “A closed form design method for the two channel quadrature
mirror filter banks”. Springer-SIViP, online, Nov., 2009.