This document proposes a new window function called the Exponential window. The Exponential window is derived similarly to the Kaiser window but uses an exponential function instead of a Bessel function. The spectrum design equations for the Exponential window are established by analyzing the relationships between its parameters and spectral characteristics. Comparisons show the Exponential window provides a better sidelobe roll-off ratio than the Kaiser window for the same mainlobe width, but a worse ripple ratio. It exhibits better ripple ratio than the ultraspherical window for narrower mainlobes and steeper sidelobe roll-off, but worse ripple ratio for wider mainlobes and shallower roll-off.
A MODIFIED DIRECTIONAL WEIGHTED CASCADED-MASK MEDIAN FILTER FOR REMOVAL OF RA... (cscpconf)
In this paper, a Modified Directional Weighted Cascaded-Mask Median (MDWCMM) filter is proposed, based on three cascaded filtering windows of different sizes. The differences between the current pixel and its neighbors are computed along four main directions, and a direction index is assigned to each edge aligned with a given direction. The minimum of these four direction indexes is then used for impulse detection in each masking window. One of the three windows is selected according to the minimum direction indexes, and filtering is performed on the selected window. Extensive simulations show that the MDWCMM filter suppresses impulse noise effectively both at low noise levels and in highly corrupted images, for both gray-level and color benchmark images.
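As a rough illustration of the directional impulse-detection statistic described above (a simplified single-window sketch in Python/NumPy, not the MDWCMM cascade itself; the function name and values are ours):

```python
import numpy as np

def min_direction_index(img, i, j):
    """Sum of absolute differences between the center pixel and its two
    neighbors along each of the four main directions (horizontal,
    vertical, and the two diagonals); the minimum over the four sums is
    used as the impulse-detection statistic."""
    directions = [
        [(0, -1), (0, 1)],     # horizontal
        [(-1, 0), (1, 0)],     # vertical
        [(-1, -1), (1, 1)],    # main diagonal
        [(-1, 1), (1, -1)],    # anti-diagonal
    ]
    c = float(img[i, j])
    sums = [sum(abs(c - float(img[i + di, j + dj])) for di, dj in d)
            for d in directions]
    return min(sums)

# A flat patch gives index 0; an isolated impulse gives a large index.
flat = np.full((5, 5), 100.0)
impulse = flat.copy()
impulse[2, 2] = 255.0
```

On an edge, at least one direction follows the edge, so the minimum stays small; only an isolated impulse makes all four sums large, which is why the minimum is a robust detector.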
The document describes the optimal linear filter, known as the Wiener filter. The Wiener filter provides the minimum mean square error (MMSE) estimate of a signal given observations that are corrupted by noise. The Wiener filter coefficients are determined by solving the Wiener-Hopf equations, which result from minimizing the mean square error between the estimated and actual signals. For a finite impulse response (FIR) Wiener filter, this yields a set of linear equations involving the autocorrelation of the observations and the cross-correlation between the signal and observations. The Wiener filter provides the optimal linear estimation of the desired signal within the observed data.
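The Wiener-Hopf normal equations for an FIR filter can be sketched numerically (a minimal NumPy illustration with a toy system-identification setup of our own; sample correlations stand in for the true ones):

```python
import numpy as np

def fir_wiener(x, d, p):
    """Solve the Wiener-Hopf normal equations R w = r for a p-tap FIR
    Wiener filter: R is the (Toeplitz) autocorrelation matrix of the
    observations x, r the cross-correlation between the desired
    signal d and x."""
    N = len(x)
    r_xx = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p)])
    R = np.array([[r_xx[abs(i - j)] for j in range(p)] for i in range(p)])
    r_dx = np.array([np.dot(d[k:], x[:N - k]) / N for k in range(p)])
    return np.linalg.solve(R, r_dx)

# Toy demo: with white input x and d = h * x (+ small noise), the
# MMSE solution recovers the channel taps h.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h = np.array([1.0, 0.5, 0.25])
d = np.convolve(x, h)[:20000] + 0.01 * rng.standard_normal(20000)
w = fir_wiener(x, d, 3)
```

Because the input here is white, R is close to the identity and the cross-correlation alone nearly determines the taps; with colored observations the matrix solve does real work.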
International Journal of Engineering Research and Development (IJERD Editor)
This document presents a technique for estimating parameters of a deployable mesh reflector antenna using 3D coordinate data and least squares fitting. It involves determining the unknown coefficients of the general quadratic surface equation that best fits the 3D points. The shape of the surface is then estimated as an elliptic paraboloid based on its invariants. Key parameters of the elliptic paraboloid like the focal length are then determined by reconstructing the surface in its standard form based on the estimated coefficients and orientations. Estimating these parameters at different stages of deployment testing can help validate the stability of the antenna surface and placement of its feed.
An efficient approach to wavelet image Denoising (ijcsit)
This document proposes an efficient approach to wavelet image denoising based on minimizing mean squared error. It uses Stein's unbiased risk estimate (SURE), which provides an accurate estimate of mean squared error without needing the original noiseless image. The key idea is to express the thresholding function as a linear combination of thresholds, allowing the minimization problem to be solved via a simple linear system rather than a nonlinear optimization. Experimental results show the proposed method achieves superior image quality compared to other techniques like BayesShrink and VisuShrink.
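The core SURE idea (estimating MSE without the clean image) can be sketched for plain soft thresholding; this is not the paper's linear-expansion-of-thresholds formulation, just the classical SURE threshold search, with our own toy data:

```python
import numpy as np

def soft(y, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma=1.0):
    """Stein's unbiased risk estimate of the MSE of soft thresholding
    at threshold t, for coefficients y = theta + N(0, sigma^2) noise."""
    a = np.abs(y)
    return (len(y) * sigma**2
            - 2 * sigma**2 * np.sum(a <= t)
            + np.sum(np.minimum(a, t) ** 2))

# Choose the threshold minimizing SURE over a grid -- no access to the
# clean coefficients theta is needed.
rng = np.random.default_rng(1)
theta = np.concatenate([np.zeros(900), np.full(100, 5.0)])  # sparse truth
y = theta + rng.standard_normal(theta.size)
grid = np.linspace(0.0, 3.0, 61)
t_best = grid[np.argmin([sure_soft(y, t) for t in grid])]
mse_raw = np.mean((y - theta) ** 2)
mse_den = np.mean((soft(y, t_best) - theta) ** 2)
```

The paper's contribution is to make this minimization linear by expanding the thresholding function in a basis of elementary thresholds, so the optimal weights come from a small linear system instead of a grid search.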
RESOLVING CYCLIC AMBIGUITIES AND INCREASING ACCURACY AND RESOLUTION IN DOA ES... (csandit)
A method to resolve cyclic ambiguities and increase the accuracy and resolution of direction-of-arrival (DOA) estimation using the Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) algorithm is proposed. It is based on rotating the array and sampling the received signal at multiple positions. Using this approach, the gain in accuracy and resolution is analyzed as a function of the mean and variance of the DOA. Simulation results are provided to verify this analysis.
This document discusses Wiener filters and linear optimum filtering. It introduces Wiener filters, the mean square error (MSE) criterion, and derives the Wiener-Hopf equations which describe the optimum filter coefficients that minimize the MSE. It also discusses properties of the error performance surface including its canonical quadratic form and the minimum MSE value. Applications to channel equalization and linearly constrained minimum variance filtering are also covered.
Image Restitution Using Non-Locally Centralized Sparse Representation Model (IJERA Editor)
Sparse representation models code an image patch as a linear combination of a few atoms selected from an over-complete dictionary, and have given good results in various image restitution applications. However, reconstruction of the original image is not very accurate when traditional sparse representation models are used to solve degradation problems such as blurring, noise, and down-sampling. The goal of image restitution is to suppress the sparse coding noise and improve image quality using the concept of sparse representation. To obtain good sparse coding coefficients for the original image, we exploit the image's non-local self-similarity and then centralize the sparse coding coefficients of the observed image toward those estimates. This non-locally centralized sparse representation model outperforms standard sparse representation models across image restitution problems including denoising, deblurring, and super-resolution.
This document introduces expander graphs and summarizes Kolmogorov and Barzdin's proof on realizing networks in three-dimensional space, which was one of the first examples of expander graphs. It defines expander graphs as sparsely populated but well-connected graphs. Kolmogorov and Barzdin constructed random graphs with properties equivalent to expander graphs and used these properties to prove that most networks can be realized in a sphere with volume proportional to the number of vertices. Their proof established lower and upper bounds on the volume needed for realization and handled both bounded and unbounded branching networks.
This document describes depth from defocus (DfD) techniques for estimating depth from a single image. It presents the theoretical foundations of camera geometry and point spread functions. It then describes two DfD methods - a Fourier transform method and S-transform method. It evaluates the Fourier method on synthetic images, finding it can estimate depth accurately for values beyond the camera's focal length but not before. However, the document concludes that while DfD shows promise, practical implementation is difficult due to assumptions required and sensitivity to errors.
Wavelets for computer_graphics_stollnitz (Juliocaramba)
This document provides an introduction to wavelets for computer graphics applications. It begins with an overview of how wavelet transforms can hierarchically decompose functions. It then describes the Haar wavelet basis, including how one-dimensional and two-dimensional signals can be decomposed into lower resolution approximations and detail coefficients. The document focuses on explaining the mathematical foundations of wavelet transforms using the Haar basis as an example, covering topics like multiresolution analysis, scaling functions, wavelets, and orthogonal bases. It aims to give intuition for what wavelets are and the theory needed to understand and apply them.
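One level of the Haar decomposition described above (pairwise averages and differences) is easy to sketch; this is a generic normalized-Haar illustration in NumPy, not code from the document:

```python
import numpy as np

def haar_step(signal):
    """One level of the normalized Haar transform: pairwise averages
    (low-resolution approximation) and differences (detail
    coefficients)."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_step_inverse(approx, detail):
    """Exact inverse of haar_step: interleave sums and differences."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

x = np.array([9.0, 7.0, 3.0, 5.0])
approx, detail = haar_step(x)
```

Recursing `haar_step` on the approximation yields the full multiresolution pyramid; the sqrt(2) normalization makes the basis orthonormal, so energy is preserved across the transform.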
Super-resolution reconstruction is a method for reconstructing higher-resolution images from a set of low-resolution observations. The sub-pixel differences among different observations of the same scene make it possible to create higher-resolution images of better quality. Over the last thirty years, many methods for creating high-resolution images have been proposed, but hardware implementations of such methods remain limited. Wiener filter design is one of the techniques used initially in this process, and it involves matrix inversion. A novel method for the matrix inversion is proposed in the report: QR decomposition computed via Givens rotations.
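QR by Givens rotations, mentioned above as the computational kernel, can be sketched as follows (a textbook illustration in NumPy, not the report's hardware design; each rotation touches only two rows, which is what makes the method hardware-friendly):

```python
import numpy as np

def givens_qr(A):
    """QR decomposition by Givens rotations: zero each subdiagonal
    entry of A with a 2x2 plane rotation, accumulating Q."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])      # rotates (a, b) -> (r, 0)
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
Q, R = givens_qr(A)
```

Once A = QR is available, the matrix inversion inside the Wiener design reduces to an orthogonal multiply and a triangular back-substitution, avoiding an explicit inverse.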
This document describes using the linear sampling method to reconstruct the shape of two-dimensional dielectric targets from scattered field measurements. The linear sampling method solves an integral equation to determine if a test point is inside the target support. Regularization is used to address ill-posedness. Numerical results show the reconstructed shape varies slightly with frequency and accuracy improves with more transmitters/receivers. The method provides fast reconstruction of target support but not material properties. Future work includes extending to 3D imaging and using linear sampling method results to initialize other reconstruction algorithms.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN... (sipij)
The present paper proposes an efficient denoising algorithm which works well for images corrupted with Gaussian and speckle noise. The algorithm utilizes the Alexander fractional integral filter, which works by constructing fractional mask windows computed using the Alexander polynomial. Prior to applying the designed filter, the corrupted image is decomposed using the symlet wavelet, and only the horizontal, vertical, and diagonal components are denoised with the Alexander integral filter. A significant increase in reconstruction quality was observed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which averaged 30.8059 for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming existing methods.
Image Restoration Using Nonlocally Centralized Sparse Representation and histo... (IJERA Editor)
Observed images may be noisy, blurred, or otherwise distorted due to degradation, and restoring the image information with conventional models may not be accurate enough for faithful reconstruction of the original image. This paper proposes sparse representations to improve the performance of image restoration. In this method, the sparse coding noise is modeled for image restoration, from which the sparse coefficients of the original image can be estimated. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model; for denoising, a histogram clipping method built on histogram-based sparse representation is used to effectively reduce the noise, and a TMR filter is additionally applied for image quality. Various image restoration problems, including denoising, deblurring, and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
The document discusses basic relationships between pixels in digital images. It defines that a pixel has 4 horizontal and vertical neighbors, called 4-neighbors. It also has 4 diagonal neighbors, and together with the 4-neighbors they form the 8-neighbors of a pixel. Adjacency between pixels is defined based on 4, 8 or m-connectivity depending on pixel intensity values. Connectivity and paths between pixels are also described. Regions in an image are defined as connected subsets of pixels, and region boundaries are pixels adjacent to the complement of the region.
The document discusses digital image processing techniques in the frequency domain. It begins by introducing the discrete Fourier transform (DFT) of one-variable functions and how it relates to sampling a continuous function. It then extends this concept to two-dimensional functions and images. Key topics covered include the 2D DFT and its properties such as translation, rotation, and periodicity. Aliasing in images is also discussed. The document provides examples of how to compute the DFT and inverse DFT of simple images.
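Two of the 2D DFT properties mentioned above (flat spectrum of an impulse, and translation changing only the phase) can be checked directly in NumPy; this small demonstration is ours, not taken from the document:

```python
import numpy as np

# A unit impulse has a flat 2D DFT magnitude, and shifting the image in
# space changes only the DFT's phase (the translation property).
f = np.zeros((8, 8))
f[0, 0] = 1.0
F = np.fft.fft2(f)

g = np.roll(f, (2, 3), axis=(0, 1))      # spatially shifted copy
G = np.fft.fft2(g)

f_back = np.fft.ifft2(F).real            # inverse DFT recovers the image
```

The same periodicity the document discusses is implicit in `np.roll`: a shift past the array edge wraps around, exactly matching the DFT's circular-shift convention.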
This document discusses pixel relationships and neighborhood concepts in digital images. It defines a pixel and pixel connectivity. There are different types of pixel neighborhoods, including 4-neighbor, 8-neighbor, and diagonal neighbors. Connected components are sets of pixels that are connected based on pixel adjacency. Algorithms can label connected components and identify distinct image regions. Various distance measures quantify how close pixels are, such as Euclidean, Manhattan, and chessboard distances. Arithmetic and logical operators can combine pixel values from different images. Neighborhood operations apply functions to pixels based on their values and those of nearby pixels.
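The three distance measures named above have one-line definitions; a plain Python sketch (function names are ours):

```python
import numpy as np

def euclidean(p, q):
    """Straight-line (D_E) distance between pixels p and q."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def manhattan(p, q):
    """City-block (D_4) distance: shortest path using 4-neighbors only."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """Chessboard (D_8) distance: shortest path using 8-neighbors."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
```

The three measures correspond to the neighborhood definitions: D_4 counts 4-neighbor steps, D_8 counts 8-neighbor steps (diagonals allowed), and D_E ignores the grid entirely.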
Using several mathematical examples from three different authors, in texts from different courses, this paper illustrates that the easiest way to avoid confusion and obtain correct results with the least effort is to use the proposed Excel Gamma function, explained in detail, for the proper use of the Q(z) and erfc(x) functions found in most communication courses. The paper serves as a tutorial and introduction to these functions.
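The relationship between Q(z) and erfc(x) that underlies the discussion can be stated in two lines; this is the standard identity in Python's math module, not the paper's Excel implementation:

```python
from math import erfc, sqrt

def q_func(z):
    """Gaussian tail probability Q(z) = P(X > z) for X ~ N(0, 1),
    via the complementary error function: Q(z) = erfc(z / sqrt(2)) / 2."""
    return 0.5 * erfc(z / sqrt(2))
```

Most confusion in course texts comes from the sqrt(2) scaling between the two functions; writing the identity once, as above, resolves it.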
The document discusses techniques for indexing electron diffraction patterns obtained from transmission electron microscopy. It describes how Bragg's law is used to index both ring patterns from polycrystalline samples and spot patterns from single crystal regions. Indexing ring patterns involves measuring ring diameters and calculating interplanar spacings, while indexing spot patterns requires measuring spot distances and angles to determine indices based on known crystal structures. Practice problems are provided to have students index selected electron diffraction patterns from copper and aluminum samples.
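The ring-indexing step described above reduces to the camera equation, the small-angle form of Bragg's law; a minimal sketch with hypothetical numbers of our own (a 200 kV electron wavelength of about 2.51 pm, a 1000 mm camera length, a measured 12.0 mm ring radius):

```python
def interplanar_spacing(wavelength, camera_length, ring_radius):
    """Interplanar spacing d from a diffraction ring via the camera
    equation r * d = lambda * L (small-angle form of Bragg's law).
    Any units work as long as they are consistent."""
    return wavelength * camera_length / ring_radius

# Hypothetical measurement: lambda = 2.51 pm, L = 1.0 m, r = 12.0 mm.
d = interplanar_spacing(2.51e-12, 1.0, 12.0e-3)   # metres
```

Indexing then proceeds by matching each measured d against the tabulated interplanar spacings of the candidate crystal structure.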
In this work, we study H∞ control of a wind turbine fuzzy model over a finite frequency (FF) interval. Less conservative results are obtained by using Finsler's lemma, the generalized Kalman-Yakubovich-Popov (gKYP) lemma, a linear matrix inequality (LMI) approach, and several additional slack parameters. The conditions are given in terms of LMIs, which can be solved efficiently numerically, ensuring that such fuzzy systems are admissible with a prescribed H∞ disturbance attenuation level. The FF H∞ performance approach allows state feedback design over a specific frequency interval; a simulation example is given to validate our results.
This document provides a course calendar for a machine learning course with the following contents:
- The course covers topics like Bayesian estimation, Kalman filters, particle filters, hidden Markov models, Bayesian decision theory, principal component analysis, independent component analysis, and clustering algorithms over 13 classes between September and January.
- One lecture plan discusses nonparametric density estimation approaches like histogram density estimation, kernel density estimation, and k-nearest neighbor density estimation. It also covers cross-validation techniques.
- Another document section provides an example of applying kernel density estimation and k-nearest neighbor classification to automatically sort fish based on lightness, including discussing training and test phase classification. It compares different bandwidths and values of k.
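The kernel density estimation covered in the lecture plan can be sketched in a few lines (a generic Parzen-window illustration in NumPy with a Gaussian kernel; the data and bandwidth are ours, not the course's fish example):

```python
import numpy as np

def kde(x_query, samples, h):
    """Parzen kernel density estimate with a Gaussian kernel:
    p_hat(x) = (1 / (n h)) * sum_i K((x - x_i) / h)."""
    u = (np.asarray(x_query)[:, None] - samples[None, :]) / h
    k = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    return k.sum(axis=1) / (samples.size * h)

rng = np.random.default_rng(3)
samples = rng.standard_normal(5000)          # true density: N(0, 1)
p_hat = kde([0.0], samples, h=0.3)[0]        # true density at 0 ~ 0.3989
```

The bandwidth h plays the role the lecture's cross-validation discussion addresses: too small gives a spiky estimate, too large oversmooths; here h = 0.3 biases the peak down only slightly.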
This document provides a summary of a lecture on simulation-based Bayesian estimation methods, specifically particle filters. It begins by explaining why simulation-based methods are needed for nonlinear and non-Gaussian problems where analytical solutions are not possible. It then discusses Monte Carlo sampling methods including historical examples, Monte Carlo integration to approximate integrals, and importance sampling to generate samples from a target distribution. The key steps of importance sampling are outlined.
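The importance-sampling step outlined in the lecture can be sketched with a toy target and proposal (our own choice: target N(0, 1), proposal N(0, 4), estimating E[X^2] = 1); this is generic self-normalized importance sampling, not the lecture's particle filter itself:

```python
import numpy as np

def importance_estimate(f, p_pdf, q_pdf, x):
    """Self-normalized importance sampling estimate of E_p[f(X)] from
    samples x drawn from the proposal q, using weights w = p(x)/q(x)."""
    w = p_pdf(x) / q_pdf(x)
    return np.sum(w * f(x)) / np.sum(w)

rng = np.random.default_rng(2)
p_pdf = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # target N(0,1)
q_pdf = lambda x: np.exp(-x ** 2 / 8) / np.sqrt(8 * np.pi)   # proposal N(0,4)
x = 2.0 * rng.standard_normal(200000)        # samples from the proposal
est = importance_estimate(lambda t: t ** 2, p_pdf, q_pdf, x)
```

A particle filter repeats exactly this weighting recursively over time, with resampling to keep the weights from degenerating.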
This paper presents a trifocal Rotman lens design approach. The effects of focal ratio and element spacing on the performance of the Rotman lens are described. A three-beam prototype feeding a 4-element antenna array working in L-band has been simulated using RLD v1.7 software. Simulated results show that the lens has a return loss of -12.4 dB at 1.8 GHz. The variation of beam-to-array-port phase error with changes in focal ratio and element spacing has also been investigated.
The document discusses various concepts related to digital image processing including:
1) The relationships between pixels in an image including 4-neighbors, 8-neighbors, and m-neighbors of a pixel.
2) The concepts of adjacency and connectivity between pixels based on their intensity values and whether they are neighbors.
3) Computing the shortest path between two pixels using 4, 8, or m-adjacency and examples calculating these paths.
Kellen Betts implemented two image processing techniques, linear filtering and diffusion, to repair corrupted images of Derek Zoolander. For images with global noise, linear filtering using Gaussian and Shannon filters achieved moderate success in denoising. Diffusion was more effective for images where noise was confined to a small region due to its ability to target specific image areas. The diffusion process nearly perfectly restored these localized noise images. A combination of linear filtering and diffusion provided only minimal improvement over the individual methods.
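The diffusion idea summarized above can be sketched as repeated application of the discrete Laplacian (a generic linear heat-equation scheme in NumPy, not the exact method from the report, which targeted localized regions):

```python
import numpy as np

def diffuse(img, steps=50, dt=0.2):
    """Linear (heat-equation) diffusion: each step adds dt times the
    discrete 5-point Laplacian, progressively smoothing the image.
    dt <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

rng = np.random.default_rng(4)
noisy = 100.0 + 10.0 * rng.standard_normal((64, 64))
smooth = diffuse(noisy)
```

Targeting a specific corrupted region, as in the report, amounts to applying these updates only inside a mask while holding the uncorrupted pixels fixed.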
The document summarizes key concepts in image formation, including how light interacts with objects and lenses to form images, and how different imaging systems like the human eye and digital cameras work. It discusses factors that affect image quality such as point spread functions and noise. Methods for analyzing the effects of noise propagation and algorithms on image quality are presented, such as error propagation techniques and Monte Carlo simulations.
COMPARISON OF MODERN DESCRIPTION METHODS FOR THE RECOGNITION OF ... (sipij)
Plants are one kingdom of living things. They are essential to the balance of nature and to people's lives. Plants are not just important to the human environment; they form the basis for the sustainability and long-term health of environmental systems. Besides these important facts, they have many useful applications, such as medical and agricultural applications, and plants are also the origin of coal and petroleum. For plant recognition, one part of the plant has a unique characteristic suited to the recognition process: the leaf. The present paper introduces a bag of words (BoW) and support vector machine (SVM) procedure to recognize and identify plants through their leaves. Visual contents of images are used, and the three usual phases in computer vision are performed: (i) feature detection, (ii) feature description, (iii) image description. Three different methods are applied to the Flavia dataset: the scale-invariant feature transform (SIFT) method and two combined methods, HARRIS-SIFT and features from accelerated segment test-SIFT (FAST-SIFT). The accuracy of the SIFT method, 89.3519 %, is higher than that of the other methods. A visual comparison is provided for four different species, and some quantitative results are measured and compared.
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV... (sipij)
Automatic human activity detection is one of the difficult tasks in image segmentation applications due to variations in the size, type, shape, and location of objects. In traditional probabilistic graphical segmentation models, intra- and inter-region segments may affect the overall segmentation accuracy. Moreover, both directed and undirected graphical models, such as the Markov model and the conditional random field, have limitations in human activity prediction and in modeling heterogeneous relationships. In this paper, we study and propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain graphical model. The system has three main phases, namely activity pre-processing, iterative threshold-based image enhancement, and a chain graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
A ROBUST CHAOTIC AND FAST WALSH TRANSFORM ENCRYPTION FOR GRAY SCALE BIOMEDICA... (sipij)
In this work, a new scheme of image encryption based on chaos and the Fast Walsh Transform (FWT) is proposed. We use two chaotic logistic maps and combine chaotic encryption methods with the two-dimensional FWT of images. The encryption process involves two steps: first, chaotic sequences generated by the chaotic logistic maps are used to permute and mask the intermediate results (arrays) of the FWT; the next step consists in changing the chaotic sequences, or the initial conditions of the chaotic logistic maps, between two intermediate results of the same row or column. Changing the encryption key several times on the same row or column makes the cipher more robust against attack. We tested our algorithms on many biomedical images, and we also used images from databases to compare our algorithm with those in the literature. Statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time encryption and transmission of biomedical images.
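The logistic-map permutation step can be sketched as follows (only the permutation stage, with hypothetical key values of our own; the scheme above additionally masks FWT coefficients and re-keys per row or column):

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Chaotic sequence from the logistic map x_{k+1} = r*x_k*(1 - x_k),
    discarding an initial transient of burn_in iterations."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = x
    return out

def permute(data, key):
    """Key-dependent permutation: reorder positions by sorting the
    chaotic sequence generated from the key."""
    order = np.argsort(logistic_sequence(key[0], key[1], len(data)))
    return np.asarray(data)[order]

def unpermute(scrambled, key):
    """Invert permute() given the same key."""
    order = np.argsort(logistic_sequence(key[0], key[1], len(scrambled)))
    out = np.empty_like(scrambled)
    out[order] = scrambled
    return out

key = (0.3567, 3.99)          # hypothetical (initial condition, parameter)
row = np.arange(16)
scrambled = permute(row, key)
```

Sensitivity to the key follows from the map's chaos: a tiny change in x0 or r yields a completely different sequence, hence a different permutation.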
This document describes depth from defocus (DfD) techniques for estimating depth from a single image. It presents the theoretical foundations of camera geometry and point spread functions. It then describes two DfD methods - a Fourier transform method and S-transform method. It evaluates the Fourier method on synthetic images, finding it can estimate depth accurately for values beyond the camera's focal length but not before. However, the document concludes that while DfD shows promise, practical implementation is difficult due to assumptions required and sensitivity to errors.
Wavelets for computer_graphics_stollnitzJuliocaramba
This document provides an introduction to wavelets for computer graphics applications. It begins with an overview of how wavelet transforms can hierarchically decompose functions. It then describes the Haar wavelet basis, including how one-dimensional and two-dimensional signals can be decomposed into lower resolution approximations and detail coefficients. The document focuses on explaining the mathematical foundations of wavelet transforms using the Haar basis as an example, covering topics like multiresolution analysis, scaling functions, wavelets, and orthogonal bases. It aims to give intuition for what wavelets are and the theory needed to understand and apply them.
Super-resolution reconstruction is a method for reconstructing higher resolution images from a set of low resolution observations. The sub-pixel differences among different observations of the same scene allow to create higher resolution images with better quality. In the last thirty years, many methods for creating high resolution images have been proposed. However, hardware implementations of such methods are limited. Wiener filter design is one of the techniques we will use initially for this process. Wiener filter design involves matrix inversion. A novel method for the matrix inversion has been proposed in the report. QR decomposition will be the computational algorithm used using Givens Rotation.
This document describes using the linear sampling method to reconstruct the shape of two-dimensional dielectric targets from scattered field measurements. The linear sampling method solves an integral equation to determine if a test point is inside the target support. Regularization is used to address ill-posedness. Numerical results show the reconstructed shape varies slightly with frequency and accuracy improves with more transmitters/receivers. The method provides fast reconstruction of target support but not material properties. Future work includes extending to 3D imaging and using linear sampling method results to initialize other reconstruction algorithms.
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN...sipij
The present paper, proposes an efficient denoising algorithm which works well for images corrupted with
Gaussian and speckle noise. The denoising algorithm utilizes the alexander fractional integral filter which
works by the construction of fractional masks window computed using alexander polynomial. Prior to the
application of the designed filter, the corrupted image is decomposed using symlet wavelet from which only
the horizontal, vertical and diagonal components are denoised using the alexander integral filter.
Significant increase in the reconstruction quality was noticed when the approach was applied on the
wavelet decomposed image rather than applying it directly on the noisy image. Quantitatively the results
are evaluated using the peak signal to noise ratio (PSNR) which was 30.8059 on an average for images
corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, which clearly
outperforms the existing methods.
Image Restoration UsingNonlocally Centralized Sparse Representation and histo...IJERA Editor
Due to the degradation of observed image the noisy, blurred, distorted image can be occurred .To restore the image informationby conventional modelsmay not be accurate enough for faithful reconstruction of the original image. I propose the sparse representations to improve the performance of based image restoration. In this method the sparse coding noise is added for image restoration, due to this image restoration the sparse coefficients of original image can be detected. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, fordenoising the image here we use the histogram clipping method by using histogram based sparse representation to effectively reduce the noise and also implement the TMR filter for Quality image. Various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
The document discusses basic relationships between pixels in digital images. It defines that a pixel has 4 horizontal and vertical neighbors, called 4-neighbors. It also has 4 diagonal neighbors, and together with the 4-neighbors they form the 8-neighbors of a pixel. Adjacency between pixels is defined based on 4, 8 or m-connectivity depending on pixel intensity values. Connectivity and paths between pixels are also described. Regions in an image are defined as connected subsets of pixels, and region boundaries are pixels adjacent to the complement of the region.
The document discusses digital image processing techniques in the frequency domain. It begins by introducing the discrete Fourier transform (DFT) of one-variable functions and how it relates to sampling a continuous function. It then extends this concept to two-dimensional functions and images. Key topics covered include the 2D DFT and its properties such as translation, rotation, and periodicity. Aliasing in images is also discussed. The document provides examples of how to compute the DFT and inverse DFT of simple images.
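As a concrete illustration of the translation and inversion properties mentioned above, a short NumPy check can be run; the 8x8 random image is an arbitrary stand-in for the document's examples.

```python
import numpy as np

# Verify the 2D DFT translation property on a small test image: spatially
# shifting f(x, y) changes only the phase of F(u, v), not its magnitude.
rng = np.random.default_rng(0)
f = rng.random((8, 8))
F = np.fft.fft2(f)

shifted = np.roll(f, shift=(2, 3), axis=(0, 1))   # circular translation
F_shifted = np.fft.fft2(shifted)

assert np.allclose(np.abs(F), np.abs(F_shifted))  # magnitude is invariant
# The inverse DFT recovers the original image to machine precision.
assert np.allclose(np.fft.ifft2(F).real, f)
print("translation and inversion properties hold")
```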
This document discusses pixel relationships and neighborhood concepts in digital images. It defines a pixel and pixel connectivity. There are different types of pixel neighborhoods, including 4-neighbor, 8-neighbor, and diagonal neighbors. Connected components are sets of pixels that are connected based on pixel adjacency. Algorithms can label connected components and identify distinct image regions. Various distance measures quantify how close pixels are, such as Euclidean, Manhattan, and chessboard distances. Arithmetic and logical operators can combine pixel values from different images. Neighborhood operations apply functions to pixels based on their values and those of nearby pixels.
Using several mathematical examples from three different authors in texts from different courses, this paper illustrates that the easiest way to avoid confusion and always get correct results with the least effort is to use the proposed Excel Gamma function, explained in detail, for the proper use of the Q(z) and erfc(x) functions in most communication courses. The paper serves as a tutorial and introduction to such functions.
The document discusses techniques for indexing electron diffraction patterns obtained from transmission electron microscopy. It describes how Bragg's law is used to index both ring patterns from polycrystalline samples and spot patterns from single crystal regions. Indexing ring patterns involves measuring ring diameters and calculating interplanar spacings, while indexing spot patterns requires measuring spot distances and angles to determine indices based on known crystal structures. Practice problems are provided to have students index selected electron diffraction patterns from copper and aluminum samples.
In this work, we study H∞ control of a wind turbine fuzzy model over a finite frequency (FF) interval. Less conservative results are obtained by using Finsler's lemma, the generalized Kalman-Yakubovich-Popov (gKYP) lemma, a linear matrix inequality (LMI) approach, and several added slack parameters. These conditions are given in terms of LMIs which can be solved efficiently, ensuring that such fuzzy systems are admissible with a prescribed H∞ disturbance attenuation level. The FF H∞ performance approach designs the state feedback command over a specific frequency interval; a simulation example is given to validate our results.
This document provides a course calendar for a machine learning course with the following contents:
- The course covers topics like Bayesian estimation, Kalman filters, particle filters, hidden Markov models, Bayesian decision theory, principal component analysis, independent component analysis, and clustering algorithms over 13 classes between September and January.
- One lecture plan discusses nonparametric density estimation approaches like histogram density estimation, kernel density estimation, and k-nearest neighbor density estimation. It also covers cross-validation techniques.
- Another document section provides an example of applying kernel density estimation and k-nearest neighbor classification to automatically sort fish based on lightness, including discussing training and test phase classification. It compares different bandwidths and values of k.
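A minimal Gaussian kernel density estimator along the lines discussed in the lecture plan might look like this; the bandwidth of 0.3 and the synthetic normal data are assumptions for illustration, not the course's fish example.

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth):
    """Kernel density estimate p(x) = (1/Nh) * sum_i K((x - x_i)/h)."""
    u = (x - samples[:, None]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=0) / (len(samples) * bandwidth)

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=2000)
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(samples, grid, bandwidth=0.3)

# The estimate should integrate to ~1 and peak near the true mean of 0.
area = density.sum() * (grid[1] - grid[0])        # Riemann-sum integral
print(round(area, 2), grid[np.argmax(density)])
```

Varying `bandwidth` reproduces the over/under-smoothing trade-off the lecture compares: small h gives a spiky estimate, large h blurs the mode.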
This document provides a summary of a lecture on simulation-based Bayesian estimation methods, specifically particle filters. It begins by explaining why simulation-based methods are needed for nonlinear and non-Gaussian problems where analytical solutions are not possible. It then discusses Monte Carlo sampling methods including historical examples, Monte Carlo integration to approximate integrals, and importance sampling to generate samples from a target distribution. The key steps of importance sampling are outlined.
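The importance-sampling step outlined above can be sketched as follows, estimating E[x²] under a standard normal target using samples from a broader normal proposal; the densities and sample size are illustrative choices, not the lecture's.

```python
import numpy as np

# Importance sampling: estimate E_p[x^2] for target p = N(0,1) using
# samples from a broader proposal q = N(0, 2^2), reweighting by w = p/q.
rng = np.random.default_rng(42)

def log_p(x):  # target density N(0, 1)
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_q(x):  # proposal density N(0, 4), std dev 2
    return -0.5 * (x / 2) ** 2 - np.log(2 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=200_000)
w = np.exp(log_p(x) - log_q(x))
w /= w.sum()                      # self-normalized importance weights

estimate = np.sum(w * x ** 2)     # true value is Var_p[x] = 1
print(round(estimate, 2))
```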
This paper presents a trifocal Rotman lens design approach. The effects of focal ratio and element spacing on the performance of the Rotman lens are described. A three-beam prototype feeding a 4-element antenna array working in L-band has been simulated using RLD v1.7 software. Simulated results show that the lens has a return loss of –12.4 dB at 1.8 GHz. Beam-to-array-port phase error variation with change in the focal ratio and element spacing has also been investigated.
The document discusses various concepts related to digital image processing including:
1) The relationships between pixels in an image including 4-neighbors, 8-neighbors, and m-neighbors of a pixel.
2) The concepts of adjacency and connectivity between pixels based on their intensity values and whether they are neighbors.
3) Computing the shortest path between two pixels using 4, 8, or m-adjacency and examples calculating these paths.
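The shortest-path computations in point 3 can be reproduced with a small breadth-first search over the chosen adjacency; the 3x3 binary image below is a made-up example, not one from the document.

```python
from collections import deque

# Shortest 4-path vs 8-path between two pixels of value 1, by breadth-first
# search over the chosen adjacency; returns path length or -1 if no path.
def shortest_path(grid, start, goal, adjacency=4):
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    offsets = offsets4 if adjacency == 4 else offsets8
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return -1

image = [[1, 1, 0],
         [0, 1, 1],
         [0, 0, 1]]
print(shortest_path(image, (0, 0), (2, 2), adjacency=4))  # 4 steps
print(shortest_path(image, (0, 0), (2, 2), adjacency=8))  # 2 steps
```

The 8-path is shorter because diagonal moves are allowed, which is exactly the difference between 4- and 8-adjacency the document illustrates.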
Kellen Betts implemented two image processing techniques, linear filtering and diffusion, to repair corrupted images of Derek Zoolander. For images with global noise, linear filtering using Gaussian and Shannon filters achieved moderate success in denoising. Diffusion was more effective for images where noise was confined to a small region due to its ability to target specific image areas. The diffusion process nearly perfectly restored these localized noise images. A combination of linear filtering and diffusion provided only minimal improvement over the individual methods.
The document summarizes key concepts in image formation, including how light interacts with objects and lenses to form images, and how different imaging systems like the human eye and digital cameras work. It discusses factors that affect image quality such as point spread functions and noise. Methods for analyzing the effects of noise propagation and algorithms on image quality are presented, such as error propagation techniques and Monte Carlo simulations.
COMPARISON OF MODERN DESCRIPTION METHODS FOR THE RECOGNITION OF ...sipij
Plants are one kingdom of living things. They are essential to the balance of nature and people's lives. Plants are not just important to the human environment; they form the basis for the sustainability and long-term health of environmental systems. Besides these important facts, they have many useful applications, such as medical and agricultural applications. Plants are also the origin of coal and petroleum. For plant recognition, one part of the plant has a unique characteristic suitable for the recognition process: the leaf. The present paper introduces a bag of words (BoW) and support vector machine (SVM) procedure to recognize and identify plants through their leaves. Visual contents of images are applied and three usual phases in computer vision are performed: (i) feature detection, (ii) feature description, (iii) image description. Three different methods are used on the Flavia dataset. The proposed approach uses the scale-invariant feature transform (SIFT) method and two combined methods, HARRIS-SIFT and features from accelerated segment test-SIFT (FAST-SIFT). The accuracy of the SIFT method, 89.3519%, is higher than the other methods. Visual comparison is investigated for four different species. Some quantitative results are measured and compared.
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV...sipij
Automatic human activity detection is one of the difficult tasks in image segmentation application due to
variations in size, type, shape and location of objects. In the traditional probabilistic graphical
segmentation models, intra and inter region segments may affect the overall segmentation accuracy. Also,
both directed and undirected graphical models such as Markov model, conditional random field have
limitations towards the human activity prediction and heterogeneous relationships. In this paper, we have
studied and proposed a natural solution for automatic human activity segmentation using the enhanced
probabilistic chain graphical model. This system has three main phases, namely activity pre-processing,
iterative threshold based image enhancement and chain graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
A ROBUST CHAOTIC AND FAST WALSH TRANSFORM ENCRYPTION FOR GRAY SCALE BIOMEDICA...sipij
In this work, a new scheme of image encryption based on chaos and Fast Walsh Transform (FWT) has been proposed.
We used two chaotic logistic maps and combined chaotic encryption methods to the two-dimensional FWT of images.
The encryption process involves two steps: first, chaotic sequences generated by the chaotic logistic maps are used to permute and mask the intermediate results or array of the FWT; the next step consists of changing the chaotic sequences, or the initial conditions of the chaotic logistic maps, between two intermediate results of the same row or column. Changing the encryption key several times on the same row or column makes the cipher more robust against attack. We tested our algorithms on many biomedical images. We also used images from databases to compare our algorithm to those in the literature. Statistical analysis and key sensitivity tests show that our proposed image encryption scheme provides an efficient and secure way for real-time encryption and transmission of biomedical images.
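A toy version of the chaotic permute-and-mask step might look like the sketch below, using one logistic map on a 64-byte block. The map parameter r = 3.99 and the key x0 are illustrative, and this omits the FWT stage and key-switching of the actual scheme.

```python
import numpy as np

# Logistic map x -> r*x*(1-x) generates a key stream used both to permute
# and to XOR-mask the data; the initial condition x0 acts as the key.
def logistic_stream(x0, n, r=3.99):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(block, x0):
    stream = logistic_stream(x0, block.size)
    perm = np.argsort(stream)               # chaotic permutation of bytes
    mask = (stream * 256).astype(np.uint8)  # chaotic XOR mask
    return block.ravel()[perm] ^ mask

def decrypt(cipher, x0):
    stream = logistic_stream(x0, cipher.size)
    perm = np.argsort(stream)
    mask = (stream * 256).astype(np.uint8)
    plain = np.empty_like(cipher)
    plain[perm] = cipher ^ mask             # undo mask, then permutation
    return plain

rng = np.random.default_rng(7)
block = rng.integers(0, 256, size=64, dtype=np.uint8)
cipher = encrypt(block, x0=0.3141)
assert np.array_equal(decrypt(cipher, x0=0.3141), block)
print("round trip ok")
```

The sensitivity to x0 that the key-sensitivity tests measure follows from the map's chaotic dynamics: a tiny change in x0 yields an entirely different stream.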
“FIELD PROGRAMMABLE DSP ARRAYS” - A NOVEL RECONFIGURABLE ARCHITECTURE FOR EFF...sipij
Digital signal processing functions are widely used in real-time, high-speed applications. These functions are generally implemented either on ASICs, with inflexibility, or on FPGAs, with the bottlenecks of a relatively smaller utilization factor or lower speed compared to ASICs. The proposed reconfigurable DSP processor is reminiscent of an FPGA, but with basic fixed Common Modules (CMs) (such as adders, subtractors, multipliers, scaling units and shifters) instead of CLBs. This paper introduces the development of a reconfigurable DSP processor that integrates different filter and transform functions. Switching between DSP functions is achieved by reconfiguring the interconnections between CMs. Validation of the proposed reconfigurable architecture has been performed on a Virtex5 FPGA. The architecture provides a sufficient amount of flexibility, parallelism and scalability.
EFFECTIVE PROCESSING AND ANALYSIS OF RADIOTHERAPY IMAGESsipij
The a-Si Electronic Portal Imaging Device (EPID) is an important tool to verify the location of the radiation therapy beam with respect to the patient anatomy. But Electronic Portal Images (EPI) suffer from low contrast. In order to have better in-treatment images to extract relevant features of the anatomy, image processing tools need to be integrated in radiology systems. The goal of this research work is to inspect several image processing techniques for contrast enhancement of electronic portal images and gauge parameters like mean, variance, standard deviation, MSE, RMSE, entropy, PSNR, AMBE, normalised cross correlation, average difference, structural content (SC), maximum difference and normalised absolute error (NAE) to study their visual quality improvement. In addition, by adding salt and pepper noise, Gaussian noise and motion blur, we calculate error measurement parameters like the Universal Image Quality (UIQ) index, Enhancement Measurement Error (EME), Pearson Correlation Coefficient, SNR and Mean Absolute Error (MAE). The improved results point out that image processing tools need to be incorporated into radiology for accurate delivery of dose.
Development and Hardware Implementation of an Efficient Algorithm for Cloud D...sipij
This document discusses the development and hardware implementation of an efficient algorithm for cloud detection from satellite images. The algorithm uses an adaptive thresholding approach to segment clouds from background pixels in satellite imagery. It then determines the position of the segmented clouds to calculate cloud coverage percentages. The algorithm was tested on satellite images from Spot4 and Landsat archives. It was implemented on a TMS320C6713 DSK processor using Code Composer Studio and achieved accurate cloud detection and coverage calculation on images with resolutions up to 3600x3000 pixels.
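The adaptive-thresholding-plus-coverage idea can be sketched with an Otsu threshold on a synthetic two-level scene. The algorithm in the paper is not necessarily Otsu's method; this is a stand-in for whichever adaptive threshold it uses, and the scene is synthetic.

```python
import numpy as np

# Otsu's adaptive threshold: pick t maximizing the between-class variance,
# then report cloud coverage as the fraction of pixels above t.
def otsu_threshold(image):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, 0.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0) # mean of the bright class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic scene: dark background (~40) with a bright square "cloud" (~220).
scene = np.full((100, 100), 40, dtype=np.uint8)
scene[20:60, 30:70] = 220
t = otsu_threshold(scene)
coverage = 100.0 * np.mean(scene > t)
print(t, coverage)   # coverage is 16.0% (a 40x40 cloud in a 100x100 image)
```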
CONTRAST ENHANCEMENT AND BRIGHTNESS PRESERVATION USING MULTIDECOMPOSITION HIS...sipij
Histogram Equalization (HE) has been an essential addition to the image enhancement world. While enhancement techniques like Classical Histogram Equalization (CHE), Adaptive Histogram Equalization (AHE), Bi-Histogram Equalization (BHE) and Recursive Mean Separate Histogram Equalization (RMSHE) enhance contrast, brightness is not well preserved, which gives an unpleasant look to the final image obtained. Thus, we introduce a novel technique, Multi-Decomposition Histogram Equalization (MDHE), to eliminate the drawbacks of the earlier methods. In MDHE, we decompose the input image using a unique logic, apply CHE in each of the sub-images, and then finally interpolate them in the correct order. The final image after MDHE gives the best results in terms of contrast enhancement and brightness preservation compared to all the other techniques mentioned above. We have calculated various parameters like PSNR, SNR, RMSE and MSE for every technique. Our results are well supported by bar graphs, histograms and the parameter calculations at the end.
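A minimal sketch of the decompose/equalize/recombine idea behind MDHE, assuming a simple 2x2 split; the paper's "unique logic" for decomposition and its interpolation step are not reproduced here.

```python
import numpy as np

# Classical histogram equalization (CHE) of one tile via its CDF.
def equalize(tile):
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # CDF at first used level
    scale = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1)
    return np.clip(scale[tile] * 255, 0, 255).astype(np.uint8)

# Split into four sub-images, equalize each, and stitch them back in place.
def mdhe_like(image):
    h, w = image.shape
    out = image.copy()
    for r0, r1 in ((0, h // 2), (h // 2, h)):
        for c0, c1 in ((0, w // 2), (w // 2, w)):
            out[r0:r1, c0:c1] = equalize(image[r0:r1, c0:c1])
    return out

rng = np.random.default_rng(3)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = mdhe_like(low_contrast)
# Per-tile equalization stretches the narrow input range toward 0-255.
print(low_contrast.min(), low_contrast.max(), enhanced.min(), enhanced.max())
```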
Enhancement performance of road recognition system of autonomous robots in sh...sipij
This document summarizes a research paper that proposed an algorithm to improve road recognition for autonomous vehicles in shadow scenarios. The researchers conducted experiments to test their algorithm's performance on key metrics like true positive rate and error rate. Their algorithm first converted images to HSV color space to detect shadows, then used normalized difference index and morphological operations to eliminate shadow effects before segmentation and classification. Test results showed their algorithm enhanced road recognition in the presence of shadows, advancing autonomous vehicle navigation capabilities.
Holistic privacy impact assessment framework for video privacy filtering tech...sipij
The document presents a Holistic Privacy Impact Assessment Framework (H-PIA) for evaluating video privacy filtering technologies. The framework is based on the UI-REF normative ethno-methodological framework for Privacy-by-Co-Design. The H-PIA framework integrates key holistic performance indicators (KPIs) comprising objective and subjective evaluation metrics. It assesses the optimal balance of privacy protection and security assurance for privacy filtering solutions negotiated through user-centered co-design. Both objective automated tests and a subjective user study are conducted to evaluate filtering performance based on criteria like efficacy, consistency, disambiguity, and intelligibility. Results confirm the framework enables optimally balanced privacy filtering while retaining necessary information.
Implementation of features dynamic tracking filter to tracing pupilssipij
The objective of this paper is to show the implementation of an artificial vision filter capable of tracking the pupils of a person in a video sequence. Several algorithms can achieve this objective; for this case, features dynamic tracking was selected, a method that traces patterns between the frames that form a video scene. This type of processing offers the advantage of eliminating occlusion problems for the patterns of interest. The implementation was tested on a base of videos of people with different physical characteristics of the eyes. An additional goal is to obtain information on the eye movements that are captured and the pupil coordinates for each of these movements. These data could help studies related to eye health.
A Comparative Study of DOA Estimation Algorithms with Application to Tracking...sipij
Tracking the Direction of Arrival (DOA) of a moving source is an important and challenging task in the fields of navigation, RADAR, SONAR, Wireless Sensor Networks (WSNs), etc. Tracking starts from the estimation of the DOA; taking the estimated DOA as an initial value, the Kalman Filter (KF) algorithm is used to track the moving source based on the motion model which governs the motion of the source. This comparative study analyzes the significance of non-coherent, narrowband DOA estimation algorithms with respect to tracking. The DOA estimation algorithms Multiple Signal Classification (MUSIC), Root-MUSIC and Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) are considered for the purpose of the study. A comparison in terms of optimality with respect to Signal to Noise Ratio (SNR), number of snapshots, number of antenna elements used and computational complexity is drawn between the chosen algorithms, resulting in an optimum DOA estimate. The optimum DOA estimate is taken as the initial value for the Kalman filter tracking algorithm, which is then used to track it.
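The KF tracking stage described above reduces, in the simplest case, to a 1-D constant-velocity Kalman filter on the DOA estimate; all numbers below (drift rate, noise levels, initial state) are illustrative, not the paper's.

```python
import numpy as np

# 1-D constant-velocity Kalman filter tracking a slowly drifting DOA.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (angle, rate)
H = np.array([[1.0, 0.0]])               # we observe the angle only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[4.0]])                    # measurement noise covariance (2 deg std)

x = np.array([[10.0], [0.0]])            # initial DOA estimate, degrees
P = np.eye(2)

rng = np.random.default_rng(5)
true_angle, errors = 10.0, []
for _ in range(50):
    true_angle += 0.5                                 # source drifts 0.5 deg/step
    z = true_angle + rng.normal(0.0, 2.0)             # noisy DOA measurement
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
    errors.append(abs(float(x[0, 0]) - true_angle))

# After convergence the filtered error sits well below the 2 deg noise floor.
print(round(float(np.mean(errors[20:])), 2))
```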
Image processing based girth monitoring and recording system for rubber plant...sipij
Measuring the girth and continuous monitoring of the increase in girth is one of the most important processes in rubber plantations since identification of girth deficiencies would enable planters to take corrective actions to ensure a good yield from the plantation.
This research paper presents an image processing based girth measurement and recording system that can replace the existing manual process in an efficient and economical manner.
The system uses a digital image of the tree, in which the number drawn on the tree identifies the tree number and a marked strip gives its width. The image is thresholded first and then filtered using several filtering criteria to identify possible candidates for numbers. Identified blobs are then fed to the Tesseract OCR for number recognition. The thresholded image is then filtered again with different criteria to segment out the black strip drawn on the tree, which is used to calculate the width of the tree using calibration parameters. Once the tree number is identified and the width is calculated, the measured girth of the tree is stored in the database under the identified tree number.
The results obtained from the system indicated significant improvement in efficiency and economy for main plantations. As future developments, we propose a standard commercial system for girth measurement using standardized 2D barcodes as tree identifiers.
Analog signal processing approach for coarse and fine depth estimationsipij
Imaging and Image sensors is a field that is continuously evolving. There are new products coming into the
market every day. Some of these have very severe Size, Weight and Power constraints whereas other
devices have to handle very high computational loads. Some require both these conditions to be met
simultaneously. Current imaging architectures and digital image processing solutions will not be able to
meet these ever increasing demands. There is a need to develop novel imaging architectures and image
processing solutions to address these requirements. In this work we propose analog signal processing as a
solution to this problem. The analog processor is not suggested as a replacement to a digital processor but
it will be used as an augmentation device which works in parallel with the digital processor, making the
system faster and more efficient. In order to show the merits of analog processing, two stereo correspondence algorithms are implemented. We propose novel modifications to the algorithms and new imaging architectures which significantly reduce the computation time.
Modified approach to transform arc from text to linear form text a preproces...sipij
Arc-form-text is an artistic-text which is quite common in several documents such as certificates,
advertisements and history documents. OCRs fail to read such arc-form-text and it is necessary to
transform the same to linear-form-text at preprocessing stage. In this paper, we present a modification to
an existing transformation model for better readability by OCRs. The method takes the segmented arc-form-text as input. Initially two concentric ellipses are approximated to enclose the arc-form-text and later
the modified transformation model transforms the text in arc-form to linear-form. The proposed method is
implemented on several upper semi-circular arc-form-text inputs and the readability of the transformed text
is analyzed with an OCR.
Skin cure an innovative smart phone based application to assist in melanoma e...sipij
This document proposes a smartphone application called SKINcure that aims to assist with melanoma early detection and prevention. The application has two main components: 1) a UV alert module that notifies users of sunburn risk and calculates time to burn, and 2) an image analysis module that allows users to take skin images and classifies them as normal, atypical, or melanoma with 96.3-97.5% accuracy via steps such as hair detection, lesion segmentation, and classification algorithms. The proposed system utilizes a dermoscopy image database containing 200 images for development and testing, achieving high accuracy in detecting different lesion types automatically.
A STUDY FOR THE EFFECT OF THE EMPHATICNESS AND LANGUAGE AND DIALECT FOR VOIC...sipij
This study analyzes voice onset time (VOT) values for four stop consonants in Modern Standard Arabic: /d/, /t/, /dˤ/, and /tˤ/. The student researcher builds a database of carrier words with a CV-CV-CV syllable structure and computes VOT values. The main findings are that VOT values for the emphatic sounds (/dˤ/ and /tˤ/) are consistently lower than for their non-emphatic counterparts (/d/ and /t/), and that VOT can distinguish Arabic dialects. The study aims to address the lack of research analyzing phonetic features of Arabic and to help automatic speech and language processing systems.
SENSORLESS VECTOR CONTROL OF BLDC USING EXTENDED KALMAN FILTER sipij
This paper mainly deals with the implementation of a vector control technique for the brushless DC motor (BLDC). Generally, tachogenerators, resolvers or incremental encoders are used to detect the speed. These sensors require careful mounting and alignment, and special attention is required regarding electrical noise. A speed sensor needs additional space for mounting and maintenance, and hence increases the cost and size of the drive system. These problems are eliminated by sensorless vector control using an Extended Kalman Filter (EKF) and a back-EMF method for position sensing. Using the EKF and back-EMF methods, sensorless vector control of the BLDC is implemented and simulated in MATLAB/SIMULINK, and a hardware kit is implemented.
A study of a modified histogram based fast enhancement algorithm (mhbfe)sipij
Image enhancement is one of the most important issues in low-level image processing. The goal of image enhancement is to improve the quality of an image such that the enhanced image is better than the original. Conventional histogram equalization (HE) is one of the most widely used algorithms for contrast enhancement of medical images, due to its simplicity and effectiveness. However, it causes an unnatural look and visual artefacts, as it tends to change the brightness of an image. The Histogram Based Fast Enhancement Algorithm (HBFE) tries to enhance CT head images, where it improves on the washed-out effect caused by conventional histogram equalization algorithms with less complexity. It depends on using the full range of gray levels to enhance the soft tissues, ignoring other image details. We present a modification of this algorithm to make it valid for most CT image types while keeping its degree of simplicity. Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE) improves the results in terms of PSNR, AMBE and entropy. We also use statistical analysis to ensure that the improvement of the proposed modification can be generalized: ANalysis Of VAriance (ANOVA) is used first to test whether or not all the results have the same average, and then we establish the significant improvement of the modification.
GABOR WAVELETS AND MORPHOLOGICAL SHARED WEIGHTED NEURAL NETWORK BASED AUTOMAT...sipij
1) The document proposes an automatic face recognition system using Gabor wavelet face detection with neural networks and morphological shared weighted neural networks (MSNN) for face recognition.
2) Face detection is performed using Gabor filters for feature extraction and a neural network for classification. Detected faces are input to the MSNN for face recognition.
3) The MSNN uses hit-miss transforms for feature extraction in each layer, which are independent of grayscale shifts. Feature matching compares output thresholds to identify faces.
IRJET- Comparative Study of Sidelobe Roll-Off Ratio for Various Window Functi...IRJET Journal
This document compares the sidelobe roll-off ratio of the Kaiser, Cosh, and Exponential window functions. It shows through simulation results that the Exponential window provides the highest sidelobe roll-off ratio compared to the other windows. A low pass FIR filter is designed using each window function, and the filter designed with the Exponential window achieves the maximum far-end stopband attenuation.
DESPECKLING OF SAR IMAGES BY OPTIMIZING AVERAGED POWER SPECTRAL VALUE IN CURV...ijistjournal
The document describes a novel algorithm for despeckling synthetic aperture radar (SAR) images using particle swarm optimization (PSO) in the curvelet domain. The algorithm first identifies homogeneous regions in the speckled image using variance calculations. It then uses PSO to optimize the thresholding of curvelet coefficients, with the objective of minimizing the average power spectral value. This provides an optimized threshold to apply curvelet-based despeckling. The proposed method is tested on standard images and shown to outperform conventional filters like median and Lee filters in reducing speckle noise.
DESPECKLING OF SAR IMAGES BY OPTIMIZING AVERAGED POWER SPECTRAL VALUE IN CURV...ijistjournal
Synthetic Aperture Radar (SAR) images are inherently affected by multiplicative speckle noise, due to the coherent nature of the scattering phenomena. In this paper, a novel algorithm capable of suppressing speckle noise using the Particle Swarm Optimization (PSO) technique is presented. The algorithm initially identifies homogeneous regions in the corrupted image and uses PSO to optimize the thresholding of curvelet coefficients to recover the original image. The Average Power Spectrum Value (APSV) is used as the objective function of the PSO. The proposed algorithm removes speckle noise effectively; its performance is tested and compared with the mean, median, Lee, statistic Lee, Kuan, Frost and Gamma filters, outperforming these conventional filtering methods.
This document summarizes a research paper on using wavelet neural networks (WNNs) for adaptive equalization in digital communication systems. The paper proposes using WNNs structured with wavelet basis functions as the activation functions. The orthogonal least squares (OLS) algorithm is then used to update the weighting matrix and select the most important wavelet basis units, reducing redundancy. The experimental results showed that a WNN equalizer using OLS outperformed conventional neural network equalizers in terms of signal-to-noise ratio and ability to handle non-linear channels.
This document analyzes noise estimation and power spectrum analysis using different window techniques. It summarizes the results of applying rectangular, triangular, Hanning, Hamming, Kaiser, Blackman, and Chebyshev windows to a 500 sample length signal with a sampling frequency of 500 Hz. For each window, it provides the sample where the signal peaks, the peak magnitude, the peak noise value, and the frequency where peak noise occurs based on the windowed signal's power spectrum. The document concludes that different window functions produce different levels of noise reduction when estimating the power spectrum density of a random signal.
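The differing sidelobe behaviour that drives these noise-reduction differences can be measured directly from each window's spectrum. A rough peak-sidelobe measurement via zero-padded FFT follows; the 500-sample windows echo the document's signal length, but the routine is illustrative, not the document's method.

```python
import numpy as np

# Peak sidelobe level of a window, from its zero-padded magnitude spectrum:
# normalize, walk past the mainlobe to its first minimum, then take the max.
def peak_sidelobe_db(window, nfft=8192):
    spectrum = np.abs(np.fft.rfft(window, nfft))
    spectrum /= spectrum.max()
    i = 1
    while i < len(spectrum) - 1 and spectrum[i + 1] < spectrum[i]:
        i += 1                              # descend to the first null
    return 20 * np.log10(spectrum[i:].max())

n = 500
rect = np.ones(n)
hamming = np.hamming(n)
print(round(peak_sidelobe_db(rect), 1))      # about -13 dB
print(round(peak_sidelobe_db(hamming), 1))   # about -43 dB
```

The rectangular window's high sidelobes leak noise power across the spectrum, which is why the tapered windows in the document estimate the PSD with less noise.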
This document summarizes a research paper that compares different digital filtering techniques for removing noise from electrocardiogram (ECG) signals. It describes how finite impulse response (FIR) filters were designed using various windowing techniques, including rectangular, Hamming, Hanning, and Blackman windows. Infinite impulse response (IIR) filters and wavelet transforms were also evaluated for denoising ECG signals. The performance of the different filtering approaches was compared based on the power spectral density and average power of the signals before and after filtering. The paper found that an FIR filter designed with the Kaiser window showed the best results for noise removal from ECG signals.
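A Kaiser-windowed sinc low-pass of the kind the paper found best can be sketched as follows; the tap count, the 40 Hz cutoff, the 360 Hz sampling rate and beta = 8.6 are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Windowed-sinc low-pass FIR with a Kaiser window (beta = 8.6 is roughly the
# beta for ~90 dB stopband attenuation).
def kaiser_lowpass(numtaps, cutoff, fs, beta=8.6):
    n = np.arange(numtaps) - (numtaps - 1) / 2
    fc = cutoff / fs                       # normalized cutoff, cycles/sample
    h = 2 * fc * np.sinc(2 * fc * n)       # ideal low-pass impulse response
    h *= np.kaiser(numtaps, beta)          # taper to control stopband ripple
    return h / h.sum()                     # unity gain at DC

taps = kaiser_lowpass(numtaps=101, cutoff=40.0, fs=360.0)

# Frequency response check: 10 Hz (in the ECG band) passes at ~0 dB while
# 100 Hz noise is attenuated by more than 60 dB.
w = np.fft.rfftfreq(4096, d=1 / 360.0)
H = np.abs(np.fft.rfft(taps, 4096))
print(abs(20 * np.log10(H[np.argmin(np.abs(w - 10))])) < 0.1)   # True
print(20 * np.log10(H[np.argmin(np.abs(w - 100))]) < -60)       # True
```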
Design And Performance of Finite impulse Response Filter Using Hyperbolic Cos...IDES Editor
In this paper, a design and analysis of a finite impulse response filter using the Hyperbolic Cosine window (Cosh window for short) is proposed. This window is very useful for applications such as beamforming, filter design, and speech processing. A digital FIR filter designed with the Kaiser window has better far-end stop-band attenuation than filters designed with other previously well known adjustable windows, such as Dolph-Chebyshev and Saramaki, which are special cases of the Ultraspherical window; however, obtaining a digital filter which provides higher far-end stop-band attenuation than the Kaiser window would be useful. In this paper, the design of a nonrecursive digital FIR filter using the Cosh window is proposed. It provides a better sidelobe roll-off ratio and far-end stop-band attenuation than a filter designed with the well known Kaiser window, which is the advantage of the Cosh-window filter over the Kaiser-window filter. An expression for the sidelobe and far-field level has been developed. Simulation and experimental results showing good agreement with theory are provided.
This document summarizes the steps to perform colored inversion (CI) on seismic data to obtain relative acoustic impedance values. CI involves: 1) Fitting a function to the log spectrum to model it, 2) Computing the difference between the modeled log spectrum and the seismic spectrum, 3) Converting the difference spectrum to an inversion operator, 4) Convolving the operator with the seismic data to obtain relative impedance values. As a quality control, the output impedance spectrum can be checked against the input log spectrum. The document provides code to implement this CI workflow using open-source Python libraries on a dataset from the Netherlands. CI produces informative relative impedance images to aid seismic interpretation.
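Steps 1-4 of the CI workflow above can be sketched in NumPy as a frequency-domain operator. The synthetic random trace, the spectral slope alpha = -0.5 and the -90° phase convention are assumptions for illustration; the document's actual implementation fits the slope from well-log spectra of the Netherlands dataset.

```python
import numpy as np

rng = np.random.default_rng(9)
trace = rng.normal(size=512)                  # stand-in for a seismic trace
nfft = len(trace)
freqs = np.fft.rfftfreq(nfft, d=0.004)        # 4 ms sample interval

spectrum = np.fft.rfft(trace)
seismic_amp = np.abs(spectrum) + 1e-12        # avoid division by zero

# Step 1: the target log spectrum, modeled here as |F| ~ f^alpha.
alpha = -0.5                                  # assumed spectral slope
target_amp = np.where(freqs > 0, freqs, 1.0) ** alpha
target_amp[0] = 0.0                           # no DC in relative impedance

# Steps 2-3: the difference between target and seismic spectra becomes the
# operator amplitude; a -90 degree phase rotation gives the integration-like
# character of relative impedance.
operator = (target_amp / seismic_amp) * np.exp(-1j * np.pi / 2)

# Step 4: apply the operator by frequency-domain multiplication.
rel_impedance = np.fft.irfft(spectrum * operator, nfft)

# QC: the output spectrum follows the target trend (interior bins; the
# Nyquist bin is forced real by the inverse transform).
out_amp = np.abs(np.fft.rfft(rel_impedance))
assert np.allclose(out_amp[1:-1], target_amp[1:-1])
print("QC passed: output spectrum matches the target trend")
```

In practice the operator would be derived once from the log/seismic spectral fit and then convolved with every trace in the volume.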
Elements Space and Amplitude Perturbation Using Genetic Algorithm for Antenna...CSCJournals
A simple and fast genetic algorithm (GA) developed to reduce the sidelobes in non-uniformly spaced linear antenna arrays. The proposed GA algorithm optimizes two vectors of variables to increase the Main lobe to Sidelobe power ratio (M/S) of array’s radiation pattern. The algorithm, in the first phase calculates the positions of the array elements and in the second phase, it manipulates the amplitude of excitation signals for each element. The simulations performed for 16 and 24 elements array structure. The results indicated that M/S improved in first phase from 13.2 to over 22.2dB meanwhile the half power beamwidth (HPBW) left almost unchanged. After element replacement, in the second phase, by using amplitude tapering further improvement up to 32dB was achieved. Also, the simulations shown that after element space perturbation, some antenna elements can be merged together without any performance degradation in radiation pattern in terms of gain and sidelobes level.
This document summarizes a digital signal processing project that involves resampling audio signals and modeling signals using autoregressive (AR) processes.
The resampling part involves downsampling two audio signals with correct and incorrect sampling rate conversions. Graphs and analysis show the resampled signals have lower quality and more distortion compared to the originals.
The AR modeling part estimates AR model coefficients from one of the signals using the Yule-Walker equations. A filter is designed to "whiten" the signal, removing noise. Graphs and audio comparison show the filtered signal has less noise but also some quality loss.
Designing geometric parameters of axisymmetrical cassegrain antenna and corru...Editor Jacotech
Early detection of faults occurring in three-phase induction motors can appreciably reduce the costs of maintenance, which could otherwise be too much costly to repair. Internal faults in three phase induction motors can result in significant performance degradation and eventual system failures. Artificial intelligence techniques have numerous advantages over conventional Model-based and Signal Processing fault diagnostic approaches; therefore, in this paper, a soft-computing system was studied through Neural Network Analysis to detect and diagnose the stator and rotor faults. The fault diagnostic system for three-phase induction motors samples the fault symptoms and then uses a Neural Network model to first train and then identify the fault which gives fast accurate diagnostics. This approach can also be extended to other applications.
The window functions used for digital filter design are used to eliminate oscillations in
the FIR (Finite Impulse Response) filter design. In this work, the use of Particle Swarm Optimization
(PSO) algorithm is proposed in the design of cosh window function, in which has widely used in the
literature and has useful spectral parameters. The cosh window is a window function derived from the
Kaiser window. It is more advantageous than the Kaiser window because there is no power series
expansion in the time domain representation. The designed window function shows better ripple ratio
characteristics than other window functions commonly used in the literature. The results obtained
were presented in tables and figures and successful results were obtained
This document provides an introduction to signal processing techniques for analytical chemistry. It discusses basic operations like addition, subtraction, multiplication and division of signals. It also covers smoothing, differentiation, resolution enhancement, harmonic analysis, convolution and other techniques. Key aspects of signals and noise are described, including distinguishing signal from noise, measuring signal-to-noise ratio, and improving signals through techniques like ensemble averaging. Examples are provided using the free SPECTRUM software and MATLAB.
Windows used in FIR Filters optimized for Far-side Stop band Attenuation (FSA...IJERA Editor
It has been proposed that the Exponential window provides better side-lobe roll-off ratio than Kaiser window
which is very useful for some applications such as beam forming, filter design, and speech processing. In this
paper the second application i.e. design of digital nonrecursive Finite Impulse Response (FIR) filter by using
Exponential window is proposed. The far-end stopband attenuation is most significant parameter when the
signal to be filtered has great concentration of spectral energy. The filter should be designed in such a way so
that it can provide better far-end stopband attenuation (amplitude of last ripple in stopband). Digital FIR filter
designed by Kaiser window has a better far-end stopband attenuation than filter designed by the other previously
well known adjustable windows such as Dolph-Chebyshev and aramaki, which are special cases of
Ultraspherical windows, but obtaining a digital filter which performs higher far-end stopband attenuation than
Kaiser window will be useful. In this paper, the design of nonrecursive digital FIR filter has been proposed by
using Exponential window. It provides better far-end stopband attenuation than filter designed by well known
Kaiser window, which is the advantage of filter designed by Exponential window. The proposed schemes were
simulated on commercially available software and the results show the close agreement with proposed theory.
This document contains an analysis of eye diagrams and the Q-function for digital communication. It includes:
1) An explanation of how eye diagrams are generated and their purpose in analyzing communication systems. Eye diagrams provide an evaluation of signal-to-noise ratio, jitter, and other factors.
2) Code to generate eye diagrams using different bandwidths and an analysis of how bandwidth affects eye opening.
3) An explanation of the Q-function and its relationship to the normal probability density function, representing the probability of a value being above a threshold.
4) Scripts to plot the Q-function and illustrate its application in calculating bit error probability for digital transmission systems.
- The document analyzes the performance of an arctangent discriminator for phase locked loops used in carrier tracking for GNSS receivers.
- It derives a closed-form expression for the probability distribution of the discriminator output as a function of carrier-to-noise density (CN0), allowing computation of the standard deviation without simulations.
- As an example application, it uses the model to analyze the effects of ionospheric scintillation on tracking error variance for a Galileo receiver.
Segmentation Based Multilevel Wide Band Compression for SAR Images Using Coif...CSCJournals
Synthetic aperture radar (SAR) data represents a significant resource of information for a large variety of researchers. Thus, there is a strong interest in developing data encoding and decoding algorithms which can obtain higher compression ratios while keeping image quality to an acceptable level. In this work, results of different wavelet-based image compression and segmentation based wavelet image compression are assessed through controlled experiments on synthetic SAR images. The effects of dissimilar wavelet functions, number of decompositions are examined in order to find optimal family for SAR images. The choice of optimal wavelets in segmentation based wavelet image compression is coiflet for low frequency and high frequency component. The results presented here is a good reference for SAR application developers to choose the wavelet families and also it concludes that wavelets transform is rapid, robust and reliable tool for SAR image compression. Numerical results confirm the potency of this approach.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document presents a methodology for designing low error fixed width adaptive multipliers. It begins by discussing Baugh-Wooley multiplication, which produces a 2n-bit output from n-bit inputs. For digital signal processing applications, only an n-bit output is required. Direct truncation introduces errors. The methodology proposes using a generalized index and binary thresholding to derive an error-compensation bias to reduce truncation errors. It defines different types of binary thresholding and analyzes statistics to determine average bias values. The proposed fixed width multiplier is intended to have better error performance than other existing multiplier structures.
This document discusses the effects of finite word length in digital filters. It begins with an introduction to the topic, explaining that quantization of filter coefficients and operations leads to nonlinearities that change the filter response. It then provides three sentences summarizing key points:
Quantization causes limit cycles and overflow errors that change the filter behavior. Quantization noise can be modeled as white noise, but noise from correlated quantization is often neglected. Proper scaling is needed to balance dynamic range against roundoff error from quantization.
Signal & Image Processing : An International Journal (SIPIJ) Vol.4, No.4, August 2013
DOI : 10.5121/sipij.2013.4401
EXPONENTIAL WINDOW FAMILY

Kemal Avci¹ and Arif Nacaroglu²

¹ Department of Electrical and Electronics Engineering, Abant Izzet Baysal University, Bolu, Turkey, avci_k@ibu.edu.tr
² Department of Electrical and Electronics Engineering, University of Gaziantep, Gaziantep, Turkey, arif1@gantep.edu.tr
ABSTRACT
In this paper we propose a new class of two-parameter adjustable windows, namely the Exponential window, based on the exponential function [1,2]. The Exponential window is derived in the same way as the Kaiser window, but is more computationally efficient because its time-domain function contains no power series expansion. First, the spectrum design equations for the Exponential window are established, and spectral comparisons are performed with the Cosh, Kaiser and ultraspherical windows. Compared with the Cosh and Kaiser windows, the results show that for the same window length and mainlobe width the Exponential window provides a better sidelobe roll-off ratio characteristic, which may be important for some applications, but a worse ripple ratio. The second comparison, performed with the ultraspherical window for the same window length, mainlobe width and sidelobe roll-off ratio, demonstrates that the Exponential window exhibits a better ripple ratio for the narrower mainlobe width and larger sidelobe roll-off ratio, but a worse ripple ratio for the wider mainlobe width and smaller sidelobe roll-off ratio.
KEYWORDS
Window function, Exponential window, Cosh window, Kaiser window, Ultraspherical window
1. INTRODUCTION
Providing new window functions (or simply windows) remains of interest because windows are widely used in digital signal processing applications, e.g., signal analysis and estimation, digital filter design and speech processing [1-3]. Many windows have been proposed in the literature [4-16]. Since the best window depends on the application, they are known as suboptimal solutions.
The Kaiser window [5] is a well-known two-parameter flexible window, widely used for FIR filter design and spectrum analysis applications. It performs well because it closely approximates the discrete prolate spheroidal functions, which have maximum energy concentration in the mainlobe. By adjusting its two independent parameters, the window length and the shape parameter, it can control the spectral parameters, mainlobe width and ripple ratio, for various applications.
The sidelobe roll-off ratio, which is important for some applications, is another window spectral parameter used to differentiate the performances of windows. For beamforming applications, a higher sidelobe roll-off ratio means better rejection of far-end interference [11]. For the design of nonrecursive digital filters, it reduces the far-end stopband energy [12], and for speech processing it reduces the energy leakage from one band to another [17].
The Kaiser window provides a better sidelobe roll-off ratio characteristic than the other well-known two-parameter adjustable windows such as Dolph-Chebyshev [4] and Saramaki [6].
Therefore, providing a window with a higher sidelobe roll-off characteristic than the Kaiser window will be useful for some signal processing applications.
In this paper, a new window based on the exponential function is proposed to provide a higher sidelobe roll-off ratio than the Kaiser window.
2. DERIVATION OF THE EXPONENTIAL WINDOW
In this section, a brief explanation about how to derive the proposed window function is given.
2.1. Windows
An N-length window, denoted by w(nT), is a time-domain function which is nonzero for |n| ≤ (N-1)/2 and zero otherwise. Windows are generally compared and classified in terms of their spectral characteristics. The frequency spectrum of w(nT) can be found from

W(e^{jωT}) = A(ω) = w(0) + 2 ∑_{n=1}^{(N-1)/2} w(nT) cos(ωnT)        (1)
where T is the sample period. A typical window has a normalized amplitude spectrum in dB
range as in Figure 1.
Figure 1. A typical window’s normalized amplitude spectrum
The normalized spectrum in Figure 1 can be obtained from

W_N(e^{jωT}) = 20 log10( |A(ω)| / |A(ω)|max )        (2)
The common spectral characteristic parameters used to distinguish window performance are the mainlobe width (wM), the ripple ratio (R) and the sidelobe roll-off ratio (S). From Figure 1, these parameters can be defined as

wM = two times the half mainlobe width = 2wR
R = maximum sidelobe amplitude in dB - mainlobe amplitude in dB = S1
S = maximum sidelobe amplitude in dB - minimum sidelobe amplitude in dB = S1 - SL
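These three parameters can be measured directly from a sampled spectrum. The sketch below evaluates Eq. (1) with T = 1 for a symmetric window stored by its nonnegative-index taps, normalizes per Eq. (2), and reads R and S off the sidelobe peaks; the helper names (`spectrum_db`, `ripple_and_rolloff`) are ours, not from the paper, and the peak-picking is a simple illustrative heuristic.

```python
import math

def spectrum_db(w, num=4000):
    # w holds the taps w(0)..w((N-1)/2) of a symmetric window.
    # Eq. (1) with T = 1: A(omega) = w(0) + 2*sum_{n>=1} w(n)*cos(omega*n),
    # then Eq. (2): normalize to the spectrum peak and convert to dB.
    amps = []
    for k in range(num + 1):
        omega = math.pi * k / num
        a = w[0] + 2.0 * sum(w[n] * math.cos(omega * n)
                             for n in range(1, len(w)))
        amps.append(abs(a))
    peak = max(amps)
    return [20.0 * math.log10(max(a, 1e-12) / peak) for a in amps]

def ripple_and_rolloff(db):
    # Skip the mainlobe (walk down to its first null), then collect the
    # sidelobe peaks: R is the largest peak, S the spread between the
    # largest and smallest peak (S1 - SL in the paper's notation).
    i = 1
    while i < len(db) - 1 and not (db[i] < db[i - 1] and db[i] < db[i + 1]):
        i += 1
    side = db[i:]
    peaks = [side[j] for j in range(1, len(side) - 1)
             if side[j] >= side[j - 1] and side[j] >= side[j + 1]]
    if side[-1] > side[-2]:
        peaks.append(side[-1])        # a sidelobe can peak exactly at omega = pi
    return max(peaks), max(peaks) - min(peaks)

rect = [1.0] * 26                     # rectangular window, N = 51
R, S = ripple_and_rolloff(spectrum_db(rect))
print(round(R, 2), round(S, 2))       # about -13.25 dB and 20.9 dB here
```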
In applications, a window is desired to have both a smaller ripple ratio and a narrower mainlobe width, but these two requirements conflict [3].
2.2. Kaiser Window
The Kaiser window is defined in discrete time as [3, 5]

w_k(n) = I0( αk √(1 - (2n/(N-1))²) ) / I0(αk),   |n| ≤ (N-1)/2,
w_k(n) = 0,                                      otherwise        (3)
where αk is the adjustable shape parameter, and I0(x) is the modified Bessel function of the first kind of order zero, described by the power series expansion

I0(x) = 1 + ∑_{k=1}^{∞} [ (1/k!) (x/2)^k ]²        (4)
While an approximate closed-form expression for the Kaiser window spectrum is given in [3], the exact Kaiser spectrum can be obtained from Eq. (1). Note that T = 1 is taken as the normalization for the rest of the paper.
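As a quick illustration, Eqs. (3) and (4) can be implemented in a few lines; `i0` and `kaiser` are our illustrative names, and the series is truncated at 25 terms, which is ample for moderate αk.

```python
import math

def i0(x, terms=25):
    # Eq. (4): I0(x) = 1 + sum_{k>=1} ((1/k!) * (x/2)^k)^2, truncated
    s, t = 1.0, 1.0
    for k in range(1, terms + 1):
        t *= (x / 2.0) / k            # t is now (x/2)^k / k!
        s += t * t
    return s

def kaiser(N, alpha_k):
    # Eq. (3): w_k(n) = I0(alpha_k * sqrt(1 - (2n/(N-1))^2)) / I0(alpha_k)
    half = (N - 1) / 2.0
    return [i0(alpha_k * math.sqrt(1.0 - (n / half) ** 2)) / i0(alpha_k)
            for n in range(-(N - 1) // 2, (N - 1) // 2 + 1)]

w = kaiser(51, 4.0)
print(round(w[25], 4), round(w[0], 4))   # center tap 1.0, edge tap 1/I0(4)
```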
As with the fixed windows, increasing the window length N narrows the mainlobe while the ripple ratio remains roughly constant. Larger values of the shape parameter αk result in a wider mainlobe width and a smaller ripple ratio.
2.3. Exponential Window
From Figure 2, it can be seen that e^x and I0(x) have the same shape characteristic.

Figure 2. The functions I0(x) and e^x
Therefore, a new window, called the "Exponential window" in this paper, can be proposed as

w_e(n) = e^{ αe √(1 - (2n/(N-1))²) } / e^{αe},   |n| ≤ (N-1)/2,
w_e(n) = 0,                                      otherwise        (5)
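Because Eq. (5) needs only the exponential function, its implementation is a one-liner compared with the Kaiser window's series; a minimal sketch (function name ours):

```python
import math

def exponential_window(N, alpha_e):
    # Eq. (5): w_e(n) = exp(alpha_e*sqrt(1 - (2n/(N-1))^2)) / exp(alpha_e)
    # for |n| <= (N-1)/2; no power series is required, unlike Eq. (3).
    half = (N - 1) / 2.0
    return [math.exp(alpha_e * (math.sqrt(1.0 - (n / half) ** 2) - 1.0))
            for n in range(-(N - 1) // 2, (N - 1) // 2 + 1)]

w = exponential_window(51, 2.0)
print(round(w[25], 4), round(w[0], 4))   # center tap 1.0, edge tap exp(-alpha_e)
```

Note that the edge taps equal e^{-αe}, so larger αe tapers the window more strongly toward zero at its ends.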
Like the Kaiser window, the Exponential window has two independent parameters, namely the window length (N) and the adjustable shape parameter (αe). Figure 3 shows the time-domain characteristic of the Exponential window for various values of αe with N = 51. It is seen that αe = 0 corresponds to the rectangular window, as in the case of the Kaiser window. For larger values of αe, the Exponential window approaches a Gaussian shape.
Figure 3. Exponential window in time domain for αe = 0, 2, 4, 6, and 8 with N = 51
The exact spectrum of the Exponential window can be obtained from Eq. (1). Figure 4 shows the effect of αe on the Exponential window spectrum for a fixed length of N = 51, and Table 1 summarizes the numerical data. As seen from the figure and table, an increase in αe results in a wider mainlobe width and a smaller ripple ratio.
Figure 4. Exponential window spectrum in dB for αe = 0, 2, and 4 with N = 51
Table 1. Spectral data for the Exponential window

Window      N    αe   wR     R (dB)    S (dB)
Proposed-1  51   0    0.10   -13.25    20.90
Proposed-2  51   2    0.15   -21.73    32.95
Proposed-3  51   4    0.21   -31.84    44.54
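The R column of Table 1 can be reproduced by evaluating Eq. (1) on the window of Eq. (5) and reading off the peak sidelobe. The self-contained sketch below (helper names ours) recovers those values to within a fraction of a dB:

```python
import math

def exp_window_half(N, alpha_e):
    # taps w(0)..w((N-1)/2) of the Exponential window, Eq. (5)
    half = (N - 1) / 2.0
    return [math.exp(alpha_e * (math.sqrt(1.0 - (n / half) ** 2) - 1.0))
            for n in range((N - 1) // 2 + 1)]

def ripple_ratio_db(w, num=4000):
    # Eq. (1) spectrum with T = 1, Eq. (2) normalization, then
    # R = largest sidelobe level in dB relative to the mainlobe peak.
    db = []
    for k in range(num + 1):
        omega = math.pi * k / num
        db.append(abs(w[0] + 2.0 * sum(w[n] * math.cos(omega * n)
                                       for n in range(1, len(w)))))
    peak = max(db)
    db = [20.0 * math.log10(max(a, 1e-12) / peak) for a in db]
    i = 1                             # walk down the mainlobe to its first null
    while i < len(db) - 1 and not (db[i] < db[i - 1] and db[i] < db[i + 1]):
        i += 1
    return max(db[i:])

results = {a: round(ripple_ratio_db(exp_window_half(51, a)), 2)
           for a in (0.0, 2.0, 4.0)}
print(results)    # close to Table 1's R column: -13.25, -21.73, -31.84 dB
```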
3. SPECTRUM DESIGN EQUATIONS
It is important for some applications, such as spectrum analysis, to have window design equations which define the window parameters in terms of the spectral parameters.
To obtain the spectrum design equations for the Exponential window, it is necessary to find the
relations between the window parameters and spectral parameters empirically. Figure 4 shows the
relation between αe and the ripple ratio for the window lengths N = 51 and 101.
Figure 4. Relation between αe and R for the Exponential window with N = 51 and 101
It is seen from Figure 4 that the window length does not affect the relation between the adjustable parameter αe and the ripple ratio. Therefore, using the curve fitting method in MATLAB, the first design equation, giving αe in terms of the ripple ratio, can be obtained as
αe,Appr = 0                                      for R > -13.26
αe,Appr = -1.513×10⁻³ R² - 0.2809 R - 3.398      for -50 < R ≤ -13.26
αe,Appr = -1.085×10⁻⁴ R² - 0.1506 R - 0.304      for -120 ≤ R ≤ -50        (6)
The quadratic approximation model given by Eq. (6) for the adjustable parameter αe is plotted in Figure 5. It is seen that the proposed model provides a good approximation for N = 101. Moreover, the approximation error of the first design equation for N = 101 is plotted in Figure 6. The amplitude of the deviation in αe is below 0.06, which corresponds to a very small error in the ripple ratio.
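Eq. (6) is easy to sanity-check against Table 1: feeding the tabulated ripple ratios back through the fit should recover the corresponding αe values. A small sketch (function name ours):

```python
def alpha_e_from_R(R):
    # Eq. (6): empirical fit for the shape parameter alpha_e given the
    # ripple ratio R in dB (valid for -120 <= R <= -13.26)
    if R > -13.26:
        return 0.0
    if R > -50.0:
        return -1.513e-3 * R**2 - 0.2809 * R - 3.398
    return -1.085e-4 * R**2 - 0.1506 * R - 0.304

# Feeding back the ripple ratios of Table 1 recovers alpha_e = 0, 2, 4
print([round(alpha_e_from_R(R), 2) for R in (-13.25, -21.73, -31.84)])
```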
More accurate results can be obtained by restricting the range or using higher order
approximations, but the proposed model for the Exponential window is adequate for most
applications like the Kaiser model.
Figure 5. Approximated model for αe of the Exponential window with N = 101
Figure 6. Error curve of approximated αe versus R for N = 101
The second design equation relates the window length to the ripple ratio. To predict the window length for given values of R and wR, the normalized width parameter Dw = 2wR(N-1) is used [11]. The relation between Dw and R for the Exponential window with N = 51 and 101 is plotted in Figure 7.
Figure 7. Relation between Dw and R for the Exponential window with N = 51 and 101
It is seen from Figure 7 that as the ripple ratio becomes smaller the mainlobe width becomes
wider. Also, it is observed from the same figure that the window length has no effect on the
relation between the ripple ratio and normalized mainlobe width. By using the curve fitting
method, an approximate design relationship between the normalized width (Dw) and the ripple
ratio (R) can be established as
Dw,Appr = 0                                                 for R > -13.26
Dw,Appr = 7.58×10⁻⁵ R³ + 7.22×10⁻³ R² - 0.3566 R + 4.312    for -50 < R ≤ -13.26
Dw,Appr = -1.297×10⁻⁴ R² - 0.5281 R + 4.708                 for -120 ≤ R ≤ -50        (7)
The approximation model given by Eq. (7) for the normalized mainlobe width is plotted in Figure
8. It is seen that the proposed model provides a good approximation for N = 101.
Figure 8. Approximated model for Dw of the Exponential window with N = 101
Figure 9. Relative error of approximated Dw for the Exponential window in percent versus R with N = 101
The relative error of the approximated normalized width, in percent, versus the ripple ratio for N = 101 is plotted in Figure 9. The percentage error of the model ranges between 0.065 and -0.086, which satisfies the error criterion in [11] that the predicted error in the normalized width must be smaller than 1%.
An integer value of the window length N can be predicted from [11]

N ≥ Dw,Appr / (2 wR) + 1        (8)
Using the equations (6) through (8), an Exponential window can be designed for satisfying the
given prescribed values of the ripple ratio and mainlobe width.
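The complete design procedure of Eqs. (6)-(8) can be sketched as follows, assuming -120 ≤ R ≤ -13.26 dB; forcing N to be odd for a symmetric window is our rounding choice, and the function name is illustrative:

```python
import math

def design_exponential_window(R, w_R):
    # Given a prescribed ripple ratio R (dB) and half mainlobe width w_R
    # (rad/sample), return (N, alpha_e) per the design Eqs. (6)-(8).
    if R > -50.0:                     # Eq. (6): shape parameter
        alpha_e = -1.513e-3 * R**2 - 0.2809 * R - 3.398
    else:
        alpha_e = -1.085e-4 * R**2 - 0.1506 * R - 0.304
    if R > -50.0:                     # Eq. (7): normalized width
        D_w = 7.58e-5 * R**3 + 7.22e-3 * R**2 - 0.3566 * R + 4.312
    else:
        D_w = -1.297e-4 * R**2 - 0.5281 * R + 4.708
    N = math.ceil(D_w / (2.0 * w_R) + 1.0)   # Eq. (8), rounded up
    if N % 2 == 0:                    # force odd length for a symmetric window
        N += 1
    return N, alpha_e

# Round trip against Table 1's Proposed-3 row (R = -31.84 dB, wR = 0.21)
N, alpha_e = design_exponential_window(-31.84, 0.21)
print(N, round(alpha_e, 2))
```

The round trip recovers N = 51 and αe ≈ 4, consistent with the Proposed-3 row of Table 1.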
In some applications [17], a larger sidelobe roll-off ratio may be desired. Figure 10 shows the change in the sidelobe roll-off ratio in terms of the normalized mainlobe width parameter for N = 51 and 101. From the figure it can be seen that the sidelobe roll-off ratio grows as the normalized width increases, until one of the sidelobes is lost at higher values of αe. Unlike the ripple ratio, the sidelobe roll-off ratio characteristic of the Exponential window is significantly affected by a change in the window length.
Figure 10. Relation between Dw and S for the Exponential window with N = 51 and 101
4. SPECTRUM COMPARISON EXAMPLES
4.1. Comparison with Kaiser and Cosh Windows
Figure 11 shows a general comparison of the Cosh window, over a wide range, with the Exponential and Kaiser windows in terms of ripple ratio versus normalized mainlobe width for N = 101. The figure demonstrates that the Kaiser window provides a smaller ripple ratio than the others for the same mainlobe width. For the range Dw < 25, the Cosh window produces a smaller ripple ratio than the Exponential window, and for Dw > 25 the Cosh and Exponential windows exhibit the same ripple ratio characteristic.
Figure 11. Ripple ratio comparison between the Cosh, Exponential and Kaiser windows
for N = 101
The simulation results for the sidelobe roll-off ratio comparison are given for N = 101 in Figure 12. It is seen that the Cosh window performs better than the Kaiser window but worse than the Exponential window in terms of sidelobe roll-off ratio for the same mainlobe width, until one sidelobe is lost, where the peak values occur.
[Plot: Sidelobe Roll-off Ratio, S (dB) versus Normalized Width, Dw, for the Exponential, Cosh and Kaiser windows]
Figure 12. Sidelobe roll-off ratio comparison between the Cosh, Exponential
and Kaiser windows for N = 101
4.2. Comparison with Ultraspherical Window
Two specific examples are given for the comparison between the Exponential and ultraspherical windows. The first example is performed for a narrower mainlobe width and larger sidelobe roll-off ratio with N = 51.
[Plot: Gain (dB) versus Normalized Frequency (rad/sample) for the Exponential and Ultraspherical windows]
Figure 13. Comparison of the proposed and ultraspherical windows for narrower mainlobe width and larger
sidelobe roll-off ratio for N=51
The simulation result given in Figure 13, summarized in Table 2, shows that the Exponential window provides a better ripple ratio than the three-parameter ultraspherical window for the same window length, mainlobe width and sidelobe roll-off ratio. The ultraspherical window parameters for this example are µ = 1.99999 and xµ = 1.00039.
Table 2. Data for the first comparison example

Window          N    wR     S (dB)   R (dB)
Exponential     51   0.164  37.81    -24.1
Ultraspherical  51   0.164  37.81    -23.02
The second comparison example is given for a wider mainlobe width and smaller sidelobe roll-off ratio with N = 51. The simulation result given in Figure 14 and Table 3 shows that the ultraspherical window provides a better ripple ratio than the Exponential window in this case. The ultraspherical window parameters for this example are µ = 1.66635 and xµ = 1.00973.
[Plot: Gain (dB) versus Normalized Frequency (rad/sample) for the Exponential and Ultraspherical windows]
Figure 14. Comparison of the proposed and ultraspherical windows for wider mainlobe width and smaller
sidelobe roll-off ratio for N=51
From Figures 13 and 14, the spread between the maximum and minimum sidelobe amplitudes can also be observed to be larger for the Exponential window.
Table 3. Data for the second comparison example

Window          N    wR    S (dB)   R (dB)
Exponential     51   0.31  32.48    -50.53
Ultraspherical  51   0.31  32.48    -51.75
5. CONCLUSIONS
In this paper, a new 2-parameter window family based on the exponential function has been proposed. Since it is derived using the exponential function, it is called the "Exponential window" in this paper. First, the proposed window family was introduced by giving its derivation and mathematical definition. Then, its spectrum design equations were obtained using a curve-fitting method in MATLAB.
To demonstrate the performance of the proposed window, spectral comparisons were performed with the Cosh, Kaiser and ultraspherical windows. Comparison with the Cosh and Kaiser windows showed that the Exponential window provides a better sidelobe roll-off ratio characteristic, but presents a worse ripple ratio for the same window length and mainlobe width. As for the comparison with the 3-parameter ultraspherical window, for the same window length, mainlobe width and sidelobe roll-off ratio, the Exponential window presents a better ripple ratio for a narrower mainlobe width and larger sidelobe roll-off ratio, but exhibits a worse ripple ratio for a wider mainlobe width and smaller roll-off ratio.
REFERENCES
[1] Avci, Kemal (2008) Design of high-quality low order nonrecursive digital filters using the window
functions, PhD Thesis, Gaziantep University.
[2] Avci, Kemal & Nacaroglu, Arif (2008) "A new window based on exponential function" Proc. of IEEE Ph.D. Research in Microelectronics and Electronics (PRIME 2008), June, Istanbul, Turkey, pp 69-72.
[3] Antoniou, Andreas (2005) Digital signal processing: Signal, systems, and filters. New York: McGraw
Hill.
[4] Dolph, C. L. (1946) “A Current distribution for broadside arrays which optimizes the relationship
between beamwidth and side-lobe level” Proc. IRE, vol.34, June, pp 335-348.
[5] Kaiser, J.F. & Schafer, R.W (1980) “On the use of the Io-sinh window for spectrum analysis” IEEE
Trans. Acoustics, Speech, and Signal Processing, Vol. 28, No.1, pp 105-107.
[6] Saramaki, Tapio (1989) “A class of window functions with nearly minimum sidelobe energy for
designing FIR filters” in Proc. IEEE Int. Symp. Circuits and systems (ISCAS’89), Portland, Ore,
USA, vol.1, pp 359-362
[7] Ha, Y.H. and Pearce, J.A. (1989). “A new window and comparison to standard windows”. IEEE
Transactions on Acoustics, Speech, and Signal Processing. 37/2, pp 298-301.
[8] Adams, J.W. (1991) “A new optimal window”. IEEE Transactions on Signal Processing. 39/8, pp
1753-1769.
[9] Yang, S. and Ke, Y. (1992). “On the three-coefficient window family”. IEEE Transactions on Signal
Processing. 40/12, 3085-3088.
[10] Gautam, J.K. & Kumar, A. & Saxena, R. (1996) "On the modified Bartlett-Hanning window (family)". IEEE Transactions on Signal Processing. 44/8, pp 2098-2102.
[11] Bergen, S.W.A. & Antoniou, Andreas (2004) “Design of ultraspherical window functions with
prescribed spectral characteristics” EURASIP Journal on Applied Signal Processing, 13, pp 2053-
2065.
[12] Sharma, S.N. & Saxena, R. & Saxena, S.C. (2004) “Design of FIR filter using variable window
families: A comparative study” J. Indian Inst. Sci., Sept.-Oct., 84, pp 155-161.
[13] Avci, Kemal & Nacaroglu, Arif (2009) “Cosh window family and its application to FIR filter design”
AEU-Int. J. Electronics and Communications, 63, pp 907-916.
[14] Avci, Kemal & Nacaroglu, Arif (2008) “Modification of Cosh Window Family”. Proc. of Third
International Conference on Information and Communication Technologies (ICTTA’08). April.
Damascus, Syria, pp 291-292.
[15] Avci, Kemal & Nacaroglu, Arif (2009) “An Efficient Study on the Modification of Kaiser Window”.
MTA Review. Vol. XIX, No.1
[16] Avci, Kemal (2013) "Performance Analysis of Kaiser-Hamming Window for Nonrecursive Digital Filter Design" 21st Signal Processing and Communications Applications Conference (SIU 2013), 24-26 April, Girne, North Cyprus, pp. 1-4.
[17] Jain, A & Saxena, R & Saxena, S.C. (2005) “A simple alias-free QMF system with near-perfect
reconstruction” J. Indian Ins. Sci., Jan-Feb, no.12, pp 1-10.
Authors
Kemal Avci was born in Adiyaman, Turkey in 1980. He received his B.S., M.S., and Ph.D. degrees in Electrical and Electronics Engineering from the University of Gaziantep, Turkey in 2002, 2004, and 2008, respectively. He currently works as an assistant professor of Electrical and Electronics Engineering at Abant Izzet Baysal University. His research interests are audio signal processing and analog and digital filter design.
Arif Nacaroglu was born in Istanbul, Turkey in 1958. He received his B.S., M.S., and Ph.D. degrees in Electrical and Electronics Engineering from METU, Turkey in 1981, 1983, and 1990, respectively. Since 1999, he has been a Professor of Electrical and Electronics Engineering at the University of Gaziantep. His main research areas include switched-capacitor networks, time-varying systems, and analog and digital filter design.