This article presents a novel cosine approximation for high-speed evaluation of the Discrete Cosine Transform (DCT) using Ramanujan ordered numbers. The proposed method uses the Ramanujan ordered number to convert the angles of the cosine function to integers. These angles are then evaluated with a fourth-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients, and high-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation increases the adder count by 13.6%, but avoids floating-point multipliers: it requires (N/2) log2 N shifts and (3N/2) log2 N - N + 1 addition operations to evaluate the coefficients of an N-point DCT, thereby improving the speed of computation.
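The paper's exact polynomial is not reproduced in this summary. As an illustration of what a fourth-degree cosine approximation with error on the order of 10^-3 looks like, the sketch below uses a truncated Taylor polynomial on the reduced range [0, pi/4] (a stand-in, not the authors' polynomial; full-range evaluation would additionally need standard range reduction):

```python
import math

def cos_poly4(x):
    """Degree-4 polynomial approximation of cos(x): truncated Taylor series.

    Illustrative stand-in for the paper's polynomial.  On |x| <= pi/4 the
    truncation error stays below about 4e-4, i.e. in the 10^-3 range quoted
    in the abstract.
    """
    x2 = x * x
    return 1.0 - x2 / 2.0 + x2 * x2 / 24.0

# worst-case error on the reduced range [0, pi/4]
err = max(abs(cos_poly4(t) - math.cos(t))
          for t in [i * math.pi / 4 / 1000 for i in range(1001)])
```

The maximum error over the reduced range is roughly 3e-4, consistent with the 10^-3 order claimed for the method.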
Numerical dispersion analysis of symplectic and ADI schemes - xingangahu
This document discusses numerical dispersion analysis of symplectic and alternating direction implicit (ADI) schemes for computational electromagnetic simulation. It presents Maxwell's equations as a Hamiltonian system that can be written as symplectic or ADI schemes by approximating the time evolution operator. Three high order spatial difference approximations - high order staggered difference, compact finite difference, and scaling function approximations - are analyzed to reduce numerical dispersion when combined with the symplectic and ADI schemes. The document derives unified dispersion relationships for the symplectic and ADI schemes with different spatial difference approximations, which can be used as a reference for simulating large scale electromagnetic problems.
Real interpolation method for transfer function approximation of distributed ... - TELKOMNIKA JOURNAL
A distributed parameter system (DPS) is one of the most complex systems in control theory. The transfer function of a DPS may contain rational, nonlinear, and irrational components, which makes it difficult to study in both the time domain and the frequency domain. In this paper, a systematic approach is proposed for linearizing a DPS. The approach is based on the real interpolation method (RIM), which approximates the transfer function of the DPS by a rational-order transfer function. The results of the numerical examples show that the method is simple, computationally efficient, and flexible.
This document presents new certified optimal solutions found by the Charibde algorithm for six difficult benchmark optimization problems. Charibde combines an evolutionary algorithm and interval-based methods in a cooperative framework. It has achieved optimality proofs for five bound-constrained problems and one nonlinearly constrained problem. These problems are highly multimodal and some had not been solved before even with approximate methods. The document also compares Charibde's performance to other state-of-the-art solvers, showing it is highly competitive while providing reliable optimality proofs.
This document provides an overview of finite difference methods for solving partial differential equations. It introduces partial differential equations and various discretization methods including finite difference methods. It covers the basics of finite difference methods including Taylor series expansions, finite difference quotients, truncation error, explicit and implicit methods like the Crank-Nicolson method. It also discusses consistency, stability, and convergence of finite difference schemes. Finally, it applies these concepts to fluid flow equations and discusses conservative and transportive properties of finite difference formulations.
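As a concrete instance of the explicit schemes discussed, the sketch below (an illustration under common textbook assumptions, not taken from the document) applies a forward-time, central-space scheme to the 1-D diffusion equation u_t = alpha * u_xx, including the stability bound r = alpha*dt/dx^2 <= 1/2 that the consistency/stability discussion formalizes:

```python
def diffuse_explicit(u, alpha, dx, dt, steps):
    """Explicit FTCS scheme for u_t = alpha * u_xx with fixed boundaries."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    u = list(u)
    for _ in range(steps):
        # interior points: u_i + r*(u_{i+1} - 2 u_i + u_{i-1}); ends held fixed
        u = [u[0]] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# spike of heat in the middle of a cold rod, ends held at 0
u0 = [0.0]*10 + [1.0] + [0.0]*10
u  = diffuse_explicit(u0, alpha=1.0, dx=1.0, dt=0.4, steps=50)
```

After 50 steps the spike spreads out symmetrically and its peak decays, as the scheme's diffusive behavior predicts.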
A review of automatic differentiation and its efficient implementation - ssuserfa7e73
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done using either forward or reverse mode. Forward mode calculates how changes in inputs propagate through the function to influence the outputs, while reverse mode calculates how changes in outputs backpropagate to influence the inputs. Both modes require performing the computation twice - once for the forward pass and once for the derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of speed and memory usage.
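The forward mode described above can be sketched with dual numbers, where each value carries its derivative and every elementary operation applies the chain rule. This is a minimal illustration, not the implementation the review covers:

```python
import math

class Dual:
    """A value paired with its derivative (dot); arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dsin(x):
    # sin lifted to dual numbers: (sin x)' = cos(x) * x'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# f(x) = x*sin(x) + 3x, so f'(x) = sin(x) + x*cos(x) + 3
x = Dual(2.0, 1.0)          # seed dx/dx = 1
y = x * dsin(x) + 3 * x     # y.val = f(2), y.dot = f'(2)
```

A single forward pass propagates both the value and the derivative, which is exactly how forward mode tracks the influence of the input on the output.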
The document proposes a new method for approximating matrix finite impulse response (FIR) filters using lower order infinite impulse response (IIR) filters. The method is based on approximating descriptor systems and requires only standard linear algebraic routines. Both optimal and suboptimal cases are addressed in a unified treatment. The solution is derived using only the Markov parameters of the FIR filter and can be expressed in state-space or transfer function form. The effectiveness of the method is illustrated with a numerical example and additional applications are discussed.
This document discusses the optimal synthesis of four-bar linkages to generate a desired path. It describes using the gradient descent optimization algorithm to minimize the sum of squared errors between target precision points and points reached by the coupler link. The key constraints considered are Grashof's criterion, input link angle order, and transmission angle. A computer program implements gradient descent in MATLAB to determine the optimal four-bar linkage dimensions.
IRJET - Comparison for Max-Flow Min-Cut Algorithms for Optimal Assignment Problem - IRJET Journal
This document compares algorithms for optimally assigning individuals to centers to minimize total travel costs and distance, such as for a large-scale event. It formulates the problem as a graph assignment problem that can be solved using max-flow min-cut algorithms. It evaluates the Hungarian algorithm, Ford-Fulkerson algorithm, Edmonds-Karp algorithm, and Auction algorithm, implementing them to recommend the best for providing an optimized allocation in the shortest time.
This document proposes a data compression technique using double transforms. It compares using the Walsh-Hadamard Transform (WHT), Discrete Cosine Transform (DCT), and a proposed Walsh-Cosine double transform on several contour data sets. The double transform approach applies the WHT, selects some spectral components, converts the spectrum to the DCT domain, and applies the inverse DCT. Experimental results show the double transform reduces multiplication operations compared to DCT alone while maintaining the reconstruction error.
Numerical Solution of Diffusion Equation by Finite Difference Method - iosrjce
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
An efficient hardware logarithm generator with modified quasi-symmetrical app... - IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
Mitigating Interference to GPS Operation Using Variable Forgetting Factor Bas... - IJCNCJournal
In this paper, an interference mitigation method based on signal processing is proposed. The approach utilizes the maximum likelihood properties of the received signal and is built on maximizing the probability of the desired data. The GPS data, constructed using Binary Phase Shift Keying (BPSK) modulation, is transmitted as 1's and 0's carried on a 1575.42 MHz carrier called the L1 frequency. The statistics of the GPS data and the interference are utilized in terms of their distribution and variance, and are used to adaptively update the forgetting factor (lambda) of the Recursive Least Squares (RLS) filter. The proposed method is called Maximum Likelihood Variable Forgetting Factor (ML VFF). The adaptive update assigns lambda according to the maximum of the symbol probabilities based on these statistics.
The document presents four numerical methods for finding the nth root of a positive number: the Fixed-b method, Adaptive-b method, Simplified Adaptive-b method, and Newton-Raphson Improved (NRI) method. It analyzes the convergence properties and rate of convergence for each method. The Fixed-b and Adaptive-b methods introduce a parameter b that can improve convergence, and the Adaptive-b method allows b to adapt at each iteration. The NRI method is derived from the Simplified Adaptive-b method and shows faster convergence than Newton-Raphson in some cases.
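The Fixed-b, Adaptive-b, and NRI iterations themselves are not given in this summary. As the baseline they are compared against, the classical Newton-Raphson iteration for the nth root, solving f(x) = x^n - a = 0, can be sketched as:

```python
def nth_root(a, n, x0=None, tol=1e-12, max_iter=100):
    """Newton-Raphson for the nth root of a > 0:
    x_{k+1} = ((n - 1) * x_k + a / x_k**(n - 1)) / n
    """
    x = x0 if x0 is not None else a / n or 1.0   # crude positive start
    for _ in range(max_iter):
        x_new = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

From any positive starting point this converges quadratically near the root, which is the benchmark the paper's improved variants aim to beat.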
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Fil... - CSCJournals
This paper proposes an efficient method for the design of a two-channel quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important, as it affects both image quality and system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energy of the low-pass analysis filter of the filter bank. The objective function is minimized directly using a nonlinear unconstrained optimization method. Experimental results on images show that the proposed method performs better than existing methods. The impact of filter characteristics such as stopband attenuation, stopband edge, and filter length on the quality of the reconstructed images is also investigated.
Several methods have been proposed to approximate the sum of correlated lognormal random variables. However, the accuracy of each method depends strongly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance; no single method provides the needed accuracy in all cases. This paper proposes a universal yet very simple approximation for the sum of correlated lognormals based on the log skew normal distribution. The main contribution of this work is an analytical method for estimating the log skew normal parameters. The proposed method provides a highly accurate approximation to the sum of correlated lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB in all cases.
1) The document analyzes optimum parameters for a geometric multigrid method for solving a two-dimensional thermoelasticity problem and Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, single-grid methods, and other literature to determine if coupling equations impacts multigrid performance.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE... - IJCNCJournal
Transmitter range assignment in clustered wireless networks governs the balance between energy conservation and the connectivity needed to deliver data to the sink or gateway node. The aim of this research is to optimize energy consumption by reducing the transmission ranges of the nodes while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH) transmission range of the backbone nodes in a multihop wireless sensor network, while ensuring at least a 95% end-to-end connectivity probability.
FPGA implementation of optimal step size NLMS algorithm and its performance a... - eSAT Journals
Abstract: The Normalized Least Mean Square (NLMS) algorithm is popular due to its simplicity. The conflict between fast convergence and low excess mean square error associated with a fixed step size NLMS is resolved by using an optimal step size NLMS algorithm. The main objective of this paper is to derive a new nonparametric algorithm to control the step size; a theoretical analysis of the steady-state behavior is also presented. The simulation experiments are performed in Matlab, and the results show that the proposed algorithm has a fast convergence rate, a low error rate, and superior noise cancellation performance. Index Terms: Least Mean Square algorithm (LMS), Normalized Least Mean Square algorithm (NLMS)
FPGA implementation of optimal step size NLMS algorithm and its performance a... - eSAT Publishing House
This document discusses the implementation of an optimal step size normalized least mean square (NLMS) adaptive filtering algorithm on an FPGA. It begins with background on LMS and NLMS algorithms. It then derives the optimal step size NLMS algorithm and discusses its theoretical performance advantages over fixed step size LMS. The paper presents the FPGA implementation of the optimal step size NLMS algorithm for audio noise cancellation. Simulation results show the proposed algorithm has faster convergence, lower error rates, and superior noise cancellation performance compared to LMS and variable step size LMS algorithms. Area utilization and maximum operating frequency are also compared for the different implementations on an Altera Cyclone III FPGA.
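The optimal-step-size derivation itself is not reproduced in this summary. The fixed-step NLMS update it builds on, w <- w + mu * e * x / (x.x + eps), can be sketched as follows (a minimal illustration with hypothetical helper names, not the paper's FPGA design):

```python
import random

def nlms_step(x_vec, d, w, mu=0.5, eps=1e-8):
    """One NLMS update: returns the new weights and the a priori error."""
    y = sum(wi * xi for wi, xi in zip(w, x_vec))    # filter output
    e = d - y                                       # error against desired
    energy = sum(xi * xi for xi in x_vec) + eps     # input energy (regularized)
    return [wi + mu * e * xi / energy for wi, xi in zip(w, x_vec)], e

# identify a 2-tap system h = [0.5, -0.25] from its noiseless input/output
random.seed(0)
h = [0.5, -0.25]
w = [0.0, 0.0]
xs = [random.uniform(-1, 1) for _ in range(2000)]
for k in range(1, len(xs)):
    x_vec = [xs[k], xs[k - 1]]
    d = h[0] * xs[k] + h[1] * xs[k - 1]
    w, e = nlms_step(x_vec, d, w)
```

With noiseless data the weights converge to the true taps; the paper's contribution is replacing the fixed mu with an adaptively computed optimal step size.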
This document presents two numerical methods for evaluating integrals involving the linear-shape function times the 3D Green's function on a plane triangle. The conventional method splits the integral into an analytical and numerical part, while the alternative method proposed evaluates the integral fully numerically. Numerical results show the alternative method is conceptually simpler, easier to implement, and achieves similar or better accuracy compared to the conventional method.
Dynamic programming is a technique for solving complex problems by breaking them down into simpler sub-problems. It involves storing solutions to sub-problems for later use, avoiding recomputing them. Examples where it can be applied include matrix chain multiplication and calculating Fibonacci numbers. For matrix chains, dynamic programming finds the optimal order for multiplying matrices with minimum computations. For Fibonacci numbers, it calculates values in linear time by storing previous solutions rather than exponentially recomputing them through recursion.
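Both examples from the paragraph can be sketched in a few lines: memoized Fibonacci runs in linear time because each sub-result is stored and reused, and the matrix-chain DP fills a cost table over increasing chain lengths:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each fib(k) is computed once and cached, avoiding exponential recursion
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def matrix_chain(dims):
    """Minimum scalar multiplications to multiply matrices where
    matrix i has shape dims[i-1] x dims[i]."""
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chains of increasing length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i] * dims[k + 1] * dims[j + 1]
                             for k in range(i, j))   # best split point k
    return cost[0][n - 1]
```

For matrices of shapes 10x30, 30x5, and 5x60, multiplying (A1*A2) first costs 1500 + 3000 = 4500 operations versus 27000 the other way, and the DP finds that minimum.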
A Study of Training and Blind Equalization Algorithms for Quadrature Amplitud... - IRJET Journal
This document presents a study comparing training-based and blind equalization algorithms for quadrature amplitude modulation (QAM). It discusses key training algorithms like least mean squares (LMS) and recursive least squares (RLS), as well as blind algorithms like multi-modulus algorithm (MMA) and square contour algorithm (SCA). The document provides mathematical expressions to describe the algorithms and presents simulation results showing constellation diagrams and bit error rate comparisons between the algorithms on 64-QAM modulation. The results demonstrate that training-based methods like RLS can achieve slightly better performance than LMS at higher signal-to-noise ratios, while blind algorithms like MMA are effective for phase recovery.
The document proposes a new strategy called Maximum Distance Point Strategy (MDPS) to evaluate spherical form error from points measured by a Coordinate Measuring Machine (CMM). MDPS selects points that are maximally distant from each other to define a candidate sphere, unlike the commonly used Least Squares Method (LSM) which minimizes the sum of squared deviations of all points from the sphere. The results of MDPS are compared to LSM. MDPS is found to provide comparable or better results than LSM, especially when points are not uniformly distributed. The strategy aims to provide a more robust evaluation of spherical form error compared to existing methods.
MODIFIED ALPHA-ROOTING COLOR IMAGE ENHANCEMENT METHOD ON THE TWO-SIDE 2-D QUAT... - mathsjournal
Color in an image is resolved into 3 or 4 color components, and the 2-D images of these components are stored in separate channels. Most color image enhancement algorithms are applied channel by channel, but such a system of color image processing does not process the original color. When a color image is represented as a quaternion image, processing is done in the original colors. This paper proposes an implementation of the quaternion approach to enhancing color images, referred to as modified alpha-rooting by the two-dimensional quaternion discrete Fourier transform (2-D QDFT). Enhancement results of the proposed method are compared with channel-by-channel image enhancement by the 2-D DFT. Enhancements in color images are quantitatively measured by the color enhancement measure estimation (CEME), which allows optimum processing parameters to be selected by the genetic algorithm. Enhancement of color images by the quaternion-based method yields images that are closer to a genuine representation of the real original color.
An Efficient Multiplierless Transform Algorithm for Video Coding - CSCJournals
This paper presents an efficient algorithm to accelerate software video encoders/decoders by reducing the number of arithmetic operations for the Discrete Cosine Transform (DCT). A multiplierless Ramanujan Ordered Number DCT (RDCT) is presented which computes the coefficients using shift and addition operations only. The reduction in computational complexity has improved the performance of the video codec by almost 58% compared with the commonly used integer DCT. The results show that significant computation reduction can be achieved with negligible average peak signal-to-noise ratio (PSNR) degradation. The average structural similarity index measure (SSIM) also confirms that the degradation due to the approximation is minimal.
This document discusses algorithms for computing the Kolmogorov-Smirnov distribution, which is used to measure goodness of fit between empirical and theoretical distributions. It describes existing algorithms that are either fast but unreliable or precise but slow. The authors propose a new algorithm that uses different approximations depending on sample size and test statistic value, to provide fast and reliable computation of both the distribution and its complement. Their C program implements this approach using multiple methods, including an exact but slow recursion formula and faster but less precise approximations for large samples.
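The authors' C program is not reproduced here, but one of the fast large-sample approximations it draws on is the classical asymptotic Kolmogorov series K(x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2), sketched below:

```python
import math

def kolmogorov_cdf(x, terms=100):
    """Asymptotic Kolmogorov distribution K(x) = lim P(sqrt(n) * D_n <= x).

    Fast and accurate for large samples; the exact recursion the paper
    describes is needed when n is small.
    """
    if x <= 0:
        return 0.0
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

# the familiar 5% critical value: K(1.358) is approximately 0.95
p = kolmogorov_cdf(1.358)
```

The series converges extremely fast for moderate x (the k=2 term is already below 1e-6 at x around 1.36), which is why this branch is the cheap one in a hybrid algorithm.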
Design of Second Order Digital Differentiator and Integrator Using Forward Di... - inventionjournals
In this paper, the design of a second-order differentiator and integrator is investigated. First, the forward difference formula from numerical differentiation is applied to derive the transfer functions of the second-order differentiator and integrator. Richardson extrapolation is then used to generate high-accuracy results while using low-order formulas. Further, the conventional Lagrange FIR fractional delay filter is applied directly to implement the second-order differentiator design. Finally, the effectiveness of this new design approach is illustrated by several numerical examples.
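The "high accuracy from low-order formulas" idea behind Richardson extrapolation can be illustrated on the first derivative (an illustration of the technique, not the paper's filter design): the forward difference D(h) has O(h) error, and combining two step sizes as 2*D(h/2) - D(h) cancels the leading error term, leaving O(h^2):

```python
import math

def forward_diff(f, x, h):
    # low-order formula: error is O(h)
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    # 2*D(h/2) - D(h) cancels the O(h) term, leaving O(h^2) error
    return 2 * forward_diff(f, x, h / 2) - forward_diff(f, x, h)
```

For f = sin at x = 1 with h = 1e-3, the plain forward difference is off by about 4e-4 while the extrapolated value is accurate to better than 1e-7, with no higher-order formula ever evaluated.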
This document proposes a data compression technique using double transforms. It compares using the Walsh-Hadamard Transform (WHT), Discrete Cosine Transform (DCT), and a proposed Walsh-Cosine double transform on several contour data sets. The double transform approach applies the WHT, selects some spectral components, converts the spectrum to the DCT domain, and applies the inverse DCT. Experimental results show the double transform reduces multiplication operations compared to DCT alone while maintaining reconstruction error.
Numerical Solution of Diffusion Equation by Finite Difference Methodiosrjce
IOSR Journal of Mathematics(IOSR-JM) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mathemetics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
Mitigating Interference to GPS Operation Using Variable Forgetting Factor Bas...IJCNCJournal
In this paper, an interference method based on signal processing is proposed. The approach is based on
utilizing the maximum likelihood properties of the received signal. The approach is built on maximizing the
probability of the desired data. The GPS data, which is constructed using Binary Phase Shift Keying
(BPSK) modulation, is transmitted as “1’s” and as “0’s.” carried on 1575.42MHz carrier called the L1
frequency. The statistics of the GPS data and interference are utilized in terms of their distribution and
variance. The statistics are used to update (adaptively) the forgetting factor (Lambda) of the Recursive
Least Squares (RLS) filter. The proposed method is called Maximum Likelihood Variable Forgetting Factor
(ML VFF). The adaptive update takes on assigning lambda to the maximum of the probabilities of the
symbols based on the statistics mentioned.
The document presents four numerical methods for finding the nth root of a positive number: the Fixed-b method, Adaptive-b method, Simplified Adaptive-b method, and Newton-Raphson Improved (NRI) method. It analyzes the convergence properties and rate of convergence for each method. The Fixed-b and Adaptive-b methods introduce a parameter b that can improve convergence, and the Adaptive-b method allows b to adapt at each iteration. The NRI method is derived from the Simplified Adaptive-b method and shows faster convergence than Newton-Raphson in some cases.
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Fil...CSCJournals
This paper proposes an efficient method for the design of two-channel, quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important as it affects image quality as well as system design complexity. The design problem is formulated as weighted sum of reconstruction error in time domain and passband and stop-band energy of the low-pass analysis filter of the filter bank .The objective function is minimized directly, using nonlinear unconstrained method. Experimental results of the method on images show that the performance of the proposed method is better than that of the already existing methods. The impact of some filter characteristics, such as stopband attenuation, stopband edge, and filter length on the performance of the reconstructed images is also investigated.
Several methods have been proposed to approximate the sum of correlated lognormal RVs.
However the accuracy of each method relies highly on the region of the resulting distribution
being examined, and the individual lognormal parameters, i.e., mean and variance. There is no
such method which can provide the needed accuracy for all cases. This paper propose a
universal yet very simple approximation method for the sum of correlated lognormals based on
log skew normal approximation. The main contribution on this work is to propose an analytical
method for log skew normal parameters estimation. The proposed method provides highly
accurate approximation to the sum of correlated lognormal distributions over the whole range
of dB spreads for any correlation coefficient. Simulation results show that our method
outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all
cases.
1) The document analyzes optimum parameters for a geometric multigrid method used to solve a two-dimensional thermoelasticity problem and the Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, against single-grid methods, and against other literature to determine whether coupling the equations affects multigrid performance.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE...IJCNCJournal
Transmission range assignment in clustered wireless networks governs the trade-off between energy conservation and the connectivity needed to deliver data to the sink or gateway node. The aim of this research is to optimize energy consumption by reducing the transmission ranges of the nodes while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH) transmission range of the backbone nodes in a multihop wireless sensor network, while ensuring at least 95% end-to-end connectivity probability.
Fpga implementation of optimal step size nlms algorithm and its performance a...eSAT Journals
Abstract: The Normalized Least Mean Square (NLMS) algorithm is popular due to its simplicity. The conflict between fast convergence and low excess mean square error associated with a fixed step size NLMS is resolved by using an optimal step size NLMS algorithm. The main objective of this paper is to derive a new nonparametric algorithm to control the step size; a theoretical performance analysis of the steady-state behavior is also presented. The simulation experiments are performed in Matlab. The simulation results show that the proposed algorithm has a fast convergence rate, a low error rate, and superior noise-cancellation performance. Index Terms: Least Mean Square algorithm (LMS), Normalized Least Mean Square algorithm (NLMS)
Fpga implementation of optimal step size nlms algorithm and its performance a...eSAT Publishing House
This document discusses the implementation of an optimal step size normalized least mean square (NLMS) adaptive filtering algorithm on an FPGA. It begins with background on LMS and NLMS algorithms. It then derives the optimal step size NLMS algorithm and discusses its theoretical performance advantages over fixed step size LMS. The paper presents the FPGA implementation of the optimal step size NLMS algorithm for audio noise cancellation. Simulation results show the proposed algorithm has faster convergence, lower error rates, and superior noise cancellation performance compared to LMS and variable step size LMS algorithms. Area utilization and maximum operating frequency are also compared for the different implementations on an Altera Cyclone III FPGA.
This document presents two numerical methods for evaluating integrals involving the linear-shape function times the 3D Green's function on a plane triangle. The conventional method splits the integral into an analytical and numerical part, while the alternative method proposed evaluates the integral fully numerically. Numerical results show the alternative method is conceptually simpler, easier to implement, and achieves similar or better accuracy compared to the conventional method.
Dynamic programming is a technique for solving complex problems by breaking them down into simpler sub-problems. It involves storing solutions to sub-problems for later use, avoiding recomputing them. Examples where it can be applied include matrix chain multiplication and calculating Fibonacci numbers. For matrix chains, dynamic programming finds the optimal order for multiplying matrices with minimum computations. For Fibonacci numbers, it calculates values in linear time by storing previous solutions rather than exponentially recomputing them through recursion.
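The Fibonacci example above can be sketched directly (a generic Python illustration of the store-and-reuse idea, not code from the summarized document): keeping the two previous sub-results turns an exponential recursion into a linear pass.

```python
def fib(n):
    """Fibonacci via dynamic programming: O(n) time, no recomputation."""
    if n < 2:
        return n
    prev, curr = 0, 1          # solutions to the two smallest sub-problems
    for _ in range(n - 1):     # build each value from stored predecessors
        prev, curr = curr, prev + curr
    return curr
```

The same bottom-up pattern underlies the matrix chain case, where a table stores the optimal cost of multiplying each sub-chain.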
A Study of Training and Blind Equalization Algorithms for Quadrature Amplitud...IRJET Journal
This document presents a study comparing training-based and blind equalization algorithms for quadrature amplitude modulation (QAM). It discusses key training algorithms like least mean squares (LMS) and recursive least squares (RLS), as well as blind algorithms like multi-modulus algorithm (MMA) and square contour algorithm (SCA). The document provides mathematical expressions to describe the algorithms and presents simulation results showing constellation diagrams and bit error rate comparisons between the algorithms on 64-QAM modulation. The results demonstrate that training-based methods like RLS can achieve slightly better performance than LMS at higher signal-to-noise ratios, while blind algorithms like MMA are effective for phase recovery.
The document proposes a new strategy called Maximum Distance Point Strategy (MDPS) to evaluate spherical form error from points measured by a Coordinate Measuring Machine (CMM). MDPS selects points that are maximally distant from each other to define a candidate sphere, unlike the commonly used Least Squares Method (LSM) which minimizes the sum of squared deviations of all points from the sphere. The results of MDPS are compared to LSM. MDPS is found to provide comparable or better results than LSM, especially when points are not uniformly distributed. The strategy aims to provide a more robust evaluation of spherical form error compared to existing methods.
MODIFIED ALPHA-ROOTING COLOR IMAGE ENHANCEMENT METHOD ON THE TWO-SIDE 2-DQUAT...mathsjournal
Color in an image is resolved into 3 or 4 color components, and the 2-D images of these components are stored in separate channels. Most color image enhancement algorithms are applied channel by channel to each image, but such a system of color image processing does not process the original color. When a color image is represented as a quaternion image, processing is done in the original colors. This paper proposes an implementation of the quaternion approach to color image enhancement, referred to as modified alpha-rooting by the two-dimensional quaternion discrete Fourier transform (2-D QDFT). Enhancement results of the proposed method are compared with channel-by-channel image enhancement by the 2-D DFT. Enhancement of color images is quantitatively measured by the color enhancement measure estimation (CEME), which allows optimum processing parameters to be selected by the genetic algorithm. Enhancing color images with the quaternion-based method yields images that are closer to a genuine representation of the real original color.
An Efficient Multiplierless Transform algorithm for Video CodingCSCJournals
This paper presents an efficient algorithm to accelerate software video encoders/decoders by reducing the number of arithmetic operations for Discrete Cosine Transform (DCT). A multiplierless Ramanujan Ordered Number DCT (RDCT) is presented which computes the coefficients using shifts and addition operations only. The reduction in computational complexity has improved the performance of the video codec by almost 58% compared with the commonly used integer DCT. The results show that significant computation reduction can be achieved with negligible average peak signal-to-noise ratio (PSNR) degradation. The average structural similarity index matrix (SSIM) also ensures that the degradation due to the approximation is minimal.
This document discusses algorithms for computing the Kolmogorov-Smirnov distribution, which is used to measure goodness of fit between empirical and theoretical distributions. It describes existing algorithms that are either fast but unreliable or precise but slow. The authors propose a new algorithm that uses different approximations depending on sample size and test statistic value, to provide fast and reliable computation of both the distribution and its complement. Their C program implements this approach using multiple methods, including an exact but slow recursion formula and faster but less precise approximations for large samples.
Design of Second Order Digital Differentiator and Integrator Using Forward Di...inventionjournals
In this paper, the design of second-order digital differentiators and integrators is investigated. First, the forward difference formula of numerical differentiation is applied to derive the transfer functions of the second-order differentiator and integrator. Thereafter, Richardson extrapolation is used to generate high-accuracy results while using low-order formulas. Further, the conventional Lagrange FIR fractional delay filter is applied directly to implement the second-order differentiator design. Finally, the effectiveness of this new design approach is illustrated with several numerical examples.
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationTELKOMNIKA JOURNAL
Recently, sophisticated segmentation techniques such as the level set method, which uses numerical methods to evolve a curve by solving linear or nonlinear elliptic equations to partition an image, have become more popular and effective. In the Local Binary Fitting (LBF) algorithm, a simple contour is initialized in an image and the steepest-descent algorithm is then employed to evolve it so as to minimize the fitting energy functional. Hence, it is difficult, or sometimes impossible, to choose an initial contour position that yields a good final result. To overcome this drawback, this work treats the energy fitting problem as a meta-heuristic optimization task and introduces a variant of particle swarm optimization (PSO) into the inner optimization process. Experimental segmentation results on medical images show that the proposed method is effective on both simple and complex medical images with adequate stochastic effects, and also achieves good accuracy and high efficiency.
Image Compression using DPCM with LMS AlgorithmIRJET Journal
1. The document describes an image compression technique using Differential Pulse Code Modulation (DPCM) with a Least Mean Squares (LMS) adaptive prediction filter.
2. DPCM transmits the difference between predicted and actual pixel values (prediction errors) rather than raw pixel values, aiming to reduce redundancy. The LMS filter adaptively updates its prediction coefficients to minimize prediction errors.
3. The technique was tested by compressing a 256x256 image with 1-, 2-, and 3-bit quantizers. Compression performance was evaluated by measuring the average squared distortion and the prediction mean squared error at different bit rates. Compression improved with more bits, lowering both distortion and prediction error.
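The adaptive prediction loop described above can be sketched minimally as follows, assuming a simple linear predictor and the standard LMS weight update (the function name and parameters are illustrative, not taken from the summarized paper):

```python
def lms_predict(samples, order=2, mu=0.05):
    """DPCM-style LMS predictor: predict each sample from the previous
    `order` samples and adapt the weights to shrink the prediction error.
    Returns the sequence of prediction errors (what DPCM would transmit)."""
    w = [0.0] * order
    errors = []
    for i in range(order, len(samples)):
        x = samples[i - order:i]                        # recent history
        pred = sum(wi * xi for wi, xi in zip(w, x))     # linear prediction
        e = samples[i] - pred                           # transmitted residual
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
        errors.append(e)
    return errors
```

On a predictable signal the residuals shrink as the weights adapt, which is exactly why transmitting errors instead of raw pixels reduces redundancy.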
The document reports on the results of three image processing projects. The first project implemented Lloyd-Max quantization to reduce image file sizes and Retinex theory to compensate for uneven illumination. The second project used principal component analysis to compute eigenfaces for face recognition. The third project performed linear discriminant analysis and tensor-based linear discriminant analysis for binary classification and visual object recognition. Illumination compensation subtracted an estimated illumination plane from image intensities to reduce shadows. Eigenfaces were the principal components of a training set of face images. Tensor-based linear discriminant analysis treated images as higher-order tensors to outperform conventional LDA.
Spatial domain filtering and intensity transformations are techniques used in image processing. Spatial domain refers to the pixels that make up an image. Spatial domain techniques operate directly on pixels by applying operators to pixels and their neighbors. Common operators include averaging, median filtering, and contrast adjustments. Spatial filtering techniques include smoothing to reduce noise and sharpening to enhance edges through differentiation. Intensity transformations map input pixel values to output values using functions like logarithms, power laws, and piecewise linear approximations to modify image contrast and highlight certain intensity ranges.
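The power-law family mentioned above is easy to illustrate. The sketch below assumes 8-bit intensities and applies the standard gamma transformation s = c * r^gamma, with c chosen so the full [0, 255] range is preserved (a generic example, not from the summarized text):

```python
def gamma_transform(pixels, gamma, max_val=255):
    """Power-law intensity transformation: s = max_val * (r / max_val) ** gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return [round(max_val * (p / max_val) ** gamma) for p in pixels]
```

With gamma = 0.5 every mid-range intensity is lifted while black and white stay fixed, which is the typical use for revealing shadow detail.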
FPGA Implementation of 2-D DCT & DWT Engines for Vision Based Tracking of Dyn...IJERA Editor
Real-time motion estimation for tracking is a challenging task. Several techniques can transform an image into the frequency domain, such as the DCT, DFT, and wavelet transform. Direct implementation of the 2-D DCT takes N^4 multiplications for an N x N image, which is impractical. The proposed architecture for the 2-D DCT uses look-up tables that store pre-computed vector products, completely eliminating the multiplier. This makes the architecture highly time-efficient, and the routing delay and power consumption are also reduced significantly. Another approach, 2-D discrete wavelet transform based motion estimation (DWT-ME), provides substantial improvements in quality and area; the proposed architecture uses the Haar wavelet transform for motion estimation. In this paper, we compare the performance of the discrete cosine transform and the discrete wavelet transform for use in motion estimation.
This paper compares two logarithmic coding techniques for adaptive beamforming in wireless communications. One technique uses a direct lookup table conversion, while the other uses linear interpolation with a smaller lookup table and multiplier. Matlab simulations show that both logarithmic techniques cause small errors for address precisions above 9 bits for direct conversion and 5 bits for interpolation conversion. The results indicate the logarithmic methods provide better error performance than a fixed-point implementation, while requiring less hardware cost.
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
This document compares different techniques for texture classification, including wavelet transforms and co-occurrence matrices. It finds that the Haar wavelet technique is the most efficient in terms of time complexity and classification accuracy, except when images are rotated. The co-occurrence matrix method has higher time requirements but excellent classification results, except for rotated images where accuracy is greatly reduced due to its dependence on pixel values. Overall, the Haar wavelet proves to be the best method for texture classification based on the performance assessment parameters of time complexity and classification accuracy.
Discrete-wavelet-transform recursive inverse algorithm using second-order est...TELKOMNIKA JOURNAL
The recursive-least-squares (RLS) algorithm was introduced as an alternative to the LMS algorithm with enhanced performance. Computational complexity and instability in updating the autocorrelation matrix are among the drawbacks of the RLS algorithm that motivated the introduction of the second-order recursive inverse (RI) adaptive algorithm. The second-order RI algorithm suffered from a low convergence rate in certain scenarios that required a relatively small initial step size. In this paper, we propose a new second-order RI algorithm that projects the input signal into a new domain, the discrete wavelet transform (DWT), as a preprocessing step before running the algorithm. This transformation overcomes the low convergence rate of the second-order RI algorithm by reducing the self-correlation of the input signal in the mentioned scenarios. Experiments are conducted in a noise cancellation setting. The performance of the proposed algorithm is compared to those of the RI, original second-order RI, and RLS algorithms in different Gaussian and impulsive noise environments. Simulations demonstrate the superiority of the proposed algorithm in terms of convergence rate compared to those algorithms.
Introducing New Parameters to Compare the Accuracy and Reliability of Mean-Sh...sipij
Mean shift algorithms are among the most practical tracking methods: they are accurate and computationally simple. Different versions of this algorithm have been developed, which differ in template updating and window size. To measure the reliability and accuracy of these methods, one normally has to rely on visual results or the number of iterations. In this paper we introduce two new parameters which can be used to compare the algorithms, especially when their results are close to each other.
This document summarizes an exercise involving calculating the inverse of square matrices in three ways: analytically, using LU decomposition, and singular value decomposition. It finds that analytical calculation becomes impractically slow for matrices larger than order 13, while LU decomposition and SVD using GNU Scientific Library functions can calculate the inverse of matrices up to order 350 in under 20 seconds. SVD is found to be slightly more efficient than LU decomposition for higher order matrices. Accuracy is also compared when the input matrix is close to singular, finding SVD returns the most accurate inverse.
The document presents a new hybrid method for performing binary floating point multiplication that aims to improve speed. It combines the Dadda multiplier and modified radix-8 Booth multiplier algorithms. The hybrid method multiplies the mantissas using this approach, replacing existing multiplier designs like carry save multipliers. It achieves a maximum frequency of 555MHz, faster than existing floating point multipliers. The design is implemented in Verilog HDL and tested using Quartus II software.
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p^3) to O(m^3), where m is the size of a small subset of the p points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to do principal warp analysis.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document presents a methodology for designing low error fixed width adaptive multipliers. It begins by discussing Baugh-Wooley multiplication, which produces a 2n-bit output from n-bit inputs. For digital signal processing applications, only an n-bit output is required. Direct truncation introduces errors. The methodology proposes using a generalized index and binary thresholding to derive an error-compensation bias to reduce truncation errors. It defines different types of binary thresholding and analyzes statistics to determine average bias values. The proposed fixed width multiplier is intended to have better error performance than other existing multiplier structures.
Perimetric Complexity of Binary Digital ImagesRSARANYADEVI
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of the inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
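Assuming the standard definition P^2 / (4πA) — the 4π normalization makes a disk score exactly 1, the minimum over all shapes — the measure is a one-liner:

```python
import math

def perimetric_complexity(perimeter, area):
    """Perimetric complexity: (summed perimeter)^2 / (4 * pi * area).
    A disk attains the minimum value 1; convoluted shapes score higher."""
    return perimeter ** 2 / (4 * math.pi * area)
```

For a unit circle (P = 2π, A = π) this returns 1; for a unit square (P = 4, A = 1) it returns 4/π, about 1.27, reflecting the corners.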
This document compares the accuracy of determining volumes using close range photogrammetry versus traditional methods. It presents a case study where the volume of a test field was calculated using both approaches. Using traditional methods with 425 control points, the volume was calculated as 221475.14 m3 using the trapezoidal rule, 221424.52 m3 using Simpson's rule, and 221484.05 m3 using Simpson's 3/8 rule. Using close range photogrammetry with 42 control points and 574 generated points, the volume was calculated as 215310.60 m3 using the trapezoidal rule, 215300.43 m3 using Simpson's rule, and 215304.12 m3 using Simpson's 3/8 rule.
Towards better performance: phase congruency based face recognitionTELKOMNIKA JOURNAL
Phase congruency is an edge detector and a measure of significant features in an image; it is robust against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. First, the valuable phase congruency features, the gradient edges and their associated angles, are used separately to classify 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase; by doing this, the complexity can be significantly reduced. Second, the training process is modified with a new technique, called averaging vectors, developed to accelerate training and minimize the matching time. For broader comparison and accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive and acceptable, and the experimental results show promising recognition rates with a reasonable matching time.
A Novel Cosine Approximation for High-Speed Evaluation of DCT
Geetha K.S & M.UttaraKumari
International Journal Image Processing, (IJIP), Volume (4): Issue (6) 539
Geetha K.S geethakomandur@gmail.com
Assistant Professor, Dept of E&CE
R.V.College of Engineering,
Bangalore-59, India
M.UttaraKumari dr.uttarakumari@gmail.com
Professor, Dean of P.G.Studies
Dept of E&CE
R.V.College of Engineering,
Bangalore-59, India
Abstract
This article presents a novel cosine approximation for high-speed evaluation of the DCT (Discrete Cosine Transform) using Ramanujan Ordered Numbers. The proposed method uses the Ramanujan ordered number to convert the angles of the cosine function to integers. These angles are evaluated using a 4th-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation algorithm increases the overhead on the number of adders by 13.6%. The algorithm avoids floating-point multipliers and requires (N/2) log2 N shifts and (3N/2) log2 N - N + 1 addition operations to evaluate the N-point DCT coefficients, thereby improving the speed of computation of the coefficients.
Keywords: Cosine Approximation, High-Speed Evaluation, DCT, Ramanujan Ordered Number.
1. INTRODUCTION
High-speed approximations to the cosine function are often used in digital signal and image processing and in digital control. With the ever-increasing complexity of processing systems and the increasing demands on data rates and quality of service, efficient calculation of the cosine function with a high degree of accuracy is vital. Several methods have been proposed to evaluate these functions [1]. When the input/output precision is relatively low (less than 24 bits), table-and-addition methods are often employed [2, 3]. Efficient methods based on small multipliers and tables have been proposed in [4]. Methods based on small look-up tables and low-degree polynomial approximations with sparse coefficients are discussed in [5, 6].
Recently, there has been increasing interest in approximating a given floating-point transform using only very-large-scale-integration-friendly binary, multiplierless coefficients. Since only binary coefficients are needed, the resulting transform approximation is multiplierless, and the overall complexity of a hardware implementation can be measured in terms of the total number of adders and/or shifters required. Multiplierless approximations are normally discussed for implementing the discrete cosine transform (DCT), which is widely used in image/video coding applications. The fast bi-orthogonal binary DCT (BinDCT) [7] and integer DCT (IntDCT) [8, 9] belong to a class of multiplierless transforms which compute the coefficients
of the form ωn. They compute the integer-to-integer mapping of coefficients through lifting matrices. The performance of these transforms depends upon the lifting schemes used and the round-off functions. In general, these algorithms require the approximation of the decomposed DCT transformation matrices by proper diagonalisation. Thus, the complexity is shifted to the techniques used for decomposition.
In this work, we show that, using Ramanujan ordered numbers, it is possible to evaluate the cosine functions using only shift and addition operations. The complete operator avoids the floating-point multiplications used for evaluation of the DCT coefficients, thus making the algorithm completely multiplierless. Computation of the DCT coefficients involves evaluation of cosine angles at multiples of 2π/N. If N is chosen such that 2π/N can be represented as 2^-l + 2^-m, where l and m are integers, then the trigonometric functions can be evaluated recursively by simple shift and addition operations.
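To see why such a representation is attractive in hardware, note that multiplying a value by 2^-l + 2^-m reduces to two right shifts and one addition. A minimal Python sketch of this idea (our own illustration, not code from the paper; the sample value is arbitrary):

```python
def mul_by_pow2_sum(v: int, l: int, m: int) -> int:
    """Multiply integer v by (2^-l + 2^-m) using only shifts and an add."""
    return (v >> l) + (v >> m)

# Example: v * (2^-1 + 2^-2) = v * 0.75 on an integer sample value.
v = 1024
approx = mul_by_pow2_sum(v, 1, 2)   # (1024 >> 1) + (1024 >> 2) = 512 + 256
exact = v * 0.75
```

For exact powers of two the shift-add result matches the floating-point product exactly; in general it is the truncated fixed-point equivalent.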
2. RAMANUJAN ORDERED NUMBERS
Ramanujan ordered numbers are related to π and to integers which are powers of 2. Ramanujan ordered numbers of degree-1 were used in [10, 11] to compute the discrete Fourier transform. The accuracy of the transform can be further improved by using Ramanujan ordered numbers of degree-2, which is evident from the errors involved in the approximation.
2.1 Definition: Ramanujan ordered Number of degree-1
Ramanujan ordered numbers of degree-1, R1(a), are defined as follows:

    R1(a) = [2π / ι1(a)],  where ι1(a) = 2^-a                                (1)

Here a is a non-negative integer and [·] is a round-off function. These numbers can be computed by simple binary shifts. Consider the binary expansion of π, which is 11.00100100001111… If a is chosen as 2, then ι1(2) = 2^-2, and R1(2) = [11001.001000…] = 11001, i.e., R1(2) is equal to 25. Likewise R1(4) = 101. Thus shifting the binary point right (a+1) times yields R1(a).
Ramanujan used these numbers to approximate the value of π. Let this approximated value be π̂ and let the relative error of approximation be ε; then

    π̂ = (1/2) ι1(a) R1(a) = π(1 + ε)                                        (2)

These errors can be used to evaluate the degree of accuracy obtained in the computation of the DCT coefficients.
TABLE 1: Ramanujan ordered Number of Degree-1.

    a    R1(a)    π̂        Upper bound of error
    0    6        3.0       4.507×10^-2
    1    13       3.25      3.4507×10^-3
    3    50       3.125     5.287×10^-3

Table 1 clearly shows the numbers which can be represented as Ramanujan ordered numbers of degree-1. Normally, digital signal processing applications require the numbers to be powers of 2; hence higher-degree numbers are required for practical applications.
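To make the definition concrete, the following short Python sketch (our own illustration, not code from the paper) computes R1(a) by rounding 2π·2^a to the nearest integer and recovers the π̂ values of equation (2):

```python
import math

def ramanujan_deg1(a: int) -> int:
    """Ramanujan ordered number of degree-1: round 2*pi / 2^-a to an integer."""
    return round(2 * math.pi * (1 << a))

def pi_hat(a: int) -> float:
    """Approximation of pi recovered from R1(a), per equation (2)."""
    return ramanujan_deg1(a) * 2.0 ** (-a) / 2.0

# R1(2) = 25 (binary 11001), as in the worked example above;
# pi_hat(0) = 3.0 and pi_hat(3) = 3.125, matching Table 1.
```

The round-off function [·] of the definition is realised here by `round`; in hardware it corresponds to inspecting the bit to the right of the binary point.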
2.2 Definition: Ramanujan ordered Number of degree-2
Ramanujan ordered numbers of degree-2 [11] are defined such that 2π/N is approximated by the sum or difference of two numbers which are negative powers of 2. Thus Ramanujan ordered numbers of degree-2 are

    R2i(l, m) = [2π / ι2i(l, m)],  for i = 1, 2                              (3)

    ι21(l, m) = 2^-l + 2^-m,  for m > l ≥ 0
    ι22(l, m) = 2^-l − 2^-m,  for m − 1 > l ≥ 0                              (4)

where l and m are integers. Hence

    R21(3, 5) = 40,   R21(1, 3) = 10
Ramanujan ordered numbers of degree-2 and their properties are listed in Table 2 below.

TABLE 2: Ramanujan ordered Number of Degree-2.

    (l, m)   R(l, m)   π̂        Upper bound of error
    0, 2     5         3.125     5.28×10^-3
    1, 2     8         3.0       4.507×10^-2
    4, 5     67        3.140     5.067×10^-4

The accuracy of the numbers increases with the degree of the Ramanujan ordered numbers, at the expense of additional shifts and additions. State-of-the-art techniques in image processing use block processing for applications like image compression or image enhancement. The standardized image block size is 8×8, which provides an opportunity to use Ramanujan ordered numbers to reduce the complexity of the algorithms. Table 3 shows the higher-degree Ramanujan ordered numbers and their accuracies.

TABLE 3: Ramanujan ordered Number of Higher Degree.

    (l, m, p, …)   R(l, m, p, …)   π̂        Upper bound of error   Shifts   Adds
    1, 2           8               3.0       4.507×10^-2            2        1
    1, 2, 5        8               3.125     5.28×10^-3             3        2
    1, 2, 5, 8     8               3.1406    3.08×10^-4             4        3

Table 3 shows that the error of approximation decreases as the degree of the Ramanujan ordered numbers increases, but the computational complexity also increases. Hence the choice of a Ramanujan ordered number of degree-2 is best validated for the accuracy and the computational overhead in computing the cosine functions.
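The degree-2 definition can be checked quickly in Python (our own sketch, not the authors' code) against the worked examples and the Table 2 values:

```python
import math

def ramanujan_deg2(l: int, m: int, diff: bool = False) -> int:
    """Ramanujan ordered number of degree-2, equation (3):
    round 2*pi / iota, with iota = 2^-l + 2^-m (or 2^-l - 2^-m if diff)."""
    iota = 2.0 ** (-l) - 2.0 ** (-m) if diff else 2.0 ** (-l) + 2.0 ** (-m)
    return round(2 * math.pi / iota)

# Worked examples from the text: R21(3, 5) = 40 and R21(1, 3) = 10.
# Table 2 rows: R21(0, 2) = 5 and R21(4, 5) = 67.
```

The same function covers both forms of equation (4) through the `diff` flag, which is our own naming for illustration.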
3. COSINE APPROXIMATION USING RAMANUJAN ORDERED NUMBER
A method of computing the cosine function is to find a polynomial that approximates it locally. A polynomial is a good approximation to the function near some point x = a if the derivatives of the polynomial at that point are equal to the derivatives of the cosine curve there. Thus the higher the degree of the polynomial, the better the approximation. If p_n denotes the nth-degree polynomial about x = a for a function f, and we approximate f(x) by p_n(x) at a point x, then the difference R_n(x) = f(x) − p_n(x) is called the nth remainder for f about x = a. The plot of the 4th-order approximation along with the difference plot is shown in Figure 1.
FIGURE 1: Plot of f(x) = cos(x) and the remainder function.
Figure 2 indicates that the cosine approximation with the 4th-degree polynomial is very close to the actual values at the various angles. The accuracy obtained at various degrees of the polynomial is compared in Table 4. Thus we choose the 4th-degree polynomial for the cosine approximation.
FIGURE 2: Expanded version of cosine approximation.
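The 4th-degree polynomial in question is the truncated Maclaurin series cos(x) ≈ 1 − x²/2 + x⁴/24. A small Python sketch (ours, not from the paper) comparing it against the library cosine at the DCT angles used in Table 4:

```python
import math

def cos_poly4(x: float) -> float:
    """4th-degree polynomial approximation of cos(x): 1 - x^2/2 + x^4/24."""
    return 1.0 - x * x / 2.0 + x ** 4 / 24.0

# For the angles pi/16, 3*pi/16 and 5*pi/16 the error stays in the
# order of 10^-3 or better, consistent with the Table 4 comparison.
errors = [abs(cos_poly4(k * math.pi / 16) - math.cos(k * math.pi / 16))
          for k in (1, 3, 5)]
```

The error grows with |x|, which is why the method first maps the DCT angles into the first quadrant before applying the polynomial.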
TABLE 4: Cosine approximation in comparison with the actual value.

    Function       Degree-2    Degree-4    Actual value
    cos(π/16)      0.9807234   0.980785    0.980785
    cos(3π/16)     0.826510    0.831527    0.8314696
    cos(5π/16)     0.518085    0.55679     0.55557

The evaluation of the cosine function using the polynomial approximation is explained through the application of computing the discrete cosine transform coefficients, which use the cosine as their basis function. The discrete cosine transform (DCT) is widely used in the area of signal processing, particularly for transform coding of images. Computation of the DCT coefficients involves evaluation of cosine angles at multiples of 2π/N.
The input sequence {x_n} of size N is transformed into {y_k}. The transformation may be either the DFT or the DCT. The DCT is defined as [12]

    y_k = sqrt(2/N) ε_k Σ_{n=0}^{N−1} x_n cos((2n+1)kπ / 2N),  for k = 0, 1, …, N−1
    ε_k = 1/√2 for k = 0,  ε_k = 1 otherwise                                 (5)

Neglecting the scaling factors, the DCT kernel can be simplified as

    c_n = cos((2π · 2^-2 / N)(2n+1)k),  for 0 ≤ n ≤ N−1, 0 ≤ k ≤ N−1        (6)

The DCT coefficients are computed by evaluating sequences of the type

    {c_n | c_n = cos(2πpn/N)},  n = 0, 1, 2, …, (N−1),  p ∈ R               (7)

where R is the set of real numbers. These computations are done via a Chebyshev-type recursion. Let us define

    W(M, p) = {w_n | w_n = cos(2πpn/M)},  n = 0, 1, 2, …, Ψ,  p ∈ R         (8)
    Ψ = M/4 − β + 1,  M = N                                                  (9)

    β = 1  if 4 divides N
        2  if 2 divides N but 4 does not
        4  otherwise                                                         (10)

The use of β facilitates the computation of W(M, p) by considering cosine values from the first quadrant of the circle. Then the sequence {c_n} can be evaluated recursively using {w_n}.
The W(M, p) sequence is estimated as follows. Let us define 2π·2^-2/N as x, and hence c_n = cos(nx). We approximate x by x̂ equal to 2^-l + 2^-m, with l and m being non-negative integers. Thus we approximate the c_n's by t_n's, where α is equal to x̂²/2 − x̂⁴/4!. Since x̂ is a Ramanujan ordered number of degree-2, α is of the form 2^-c + 2^-d, where c and d are integers. Then

    t_0(p) = 1
    t_1(p) = 1 − α
    t_{n+1} = 2(1 − α) t_n − t_{n−1},  n = 1, 2, …, Ψ − 1                   (11)
The above equations show that the recursive equations can be computed by simple shift and
addition operations.
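The recursion in (11) can be sketched in Python as follows (our own illustration; the choice x̂ = 2^-3 + 2^-4 as a stand-in angle is an assumption for the demo, not a value prescribed by the paper):

```python
import math

def dct_cosines(x_hat: float, count: int) -> list:
    """Chebyshev-type recursion of equation (11):
    t_0 = 1, t_1 = 1 - alpha, t_{n+1} = 2(1 - alpha)*t_n - t_{n-1},
    with alpha = x_hat^2/2 - x_hat^4/4!, so that t_n ~ cos(n * x_hat)."""
    alpha = x_hat ** 2 / 2.0 - x_hat ** 4 / 24.0
    t = [1.0, 1.0 - alpha]
    for n in range(1, count - 1):
        t.append(2.0 * (1.0 - alpha) * t[n] - t[n - 1])
    return t[:count]

# With x_hat a sum of two negative powers of 2, every multiplication in the
# recursion body reduces to shifts and adds in a fixed-point implementation.
x_hat = 2.0 ** -3 + 2.0 ** -4
vals = dct_cosines(x_hat, 8)
max_err = max(abs(vals[n] - math.cos(n * x_hat)) for n in range(8))
```

Note that since 1 − α is the 4th-degree polynomial value of cos(x̂), each step of the recursion needs only one multiplication by a shift-add constant and one subtraction.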
4. SIMULATION RESULTS
To evaluate the performance of the proposed cosine approximation, the following cosine angles were simulated in MATLAB. The actual values are the angles evaluated using the built-in COS function; these and the values obtained from the proposed approximation are tabulated in Table 5. The deviation of the approximated results from the actual values is tabulated as the error in Table 5. The error due to approximation is of the order of 10^-3, which is acceptable for image coding applications.
TABLE 5: Comparison of the proposed cosine approximation.

    Function       Actual value   Proposed approximation   Error
    cos(π/16)      0.980785       0.9810                   −2.15×10^-4
    cos(3π/16)     0.8314696      0.83312                  −1.6504×10^-3
    cos(5π/16)     0.55557        0.5571                   −1.53×10^-3

The multiplierless cosine approximation using Ramanujan ordered numbers proved to be efficient and simple when applied to image coding. We tested the efficacy of the proposed algorithm by replacing the existing floating-point DCT/IDCT block of a JPEG encoder/decoder. To prove the efficiency of the proposed algorithm, standard test images like
Cameraman and Lena were considered for the JPEG encoder/decoder. The floating-point DCT block evaluates the 2-D DCT on 8×8 blocks of the image by implementing the direct function given in equation (5). Most commonly used algorithms [13, 14] use look-up tables to access the cosine values when computing the DCT coefficients and hence require floating-point multipliers in a hardware implementation. Table 6 compares the algorithms in terms of the number of adders, and Table 7 gives the comparison in terms of the shifters/multipliers required for a hardware implementation. The computational complexity is reduced by completely avoiding the floating-point multiplications, which are replaced by (N/2) log2 N shift operations. However, the number of addition operations required to compute the N DCT coefficients is (3N/2) log2 N − N + 1, which is 13.6% more than the number of adders required for the floating-point DCT.
TABLE 6: Computational Complexity in terms of additions
TABLE 7: Computational Complexity in terms of Multiplications
The performance of the proposed algorithm is compared with the existing floating-point DCT in terms of PSNR and the compression ratio achieved, and the results are tabulated in Table 8. The PSNR is computed using the MSE as the error metric.
    MSE = (1/MN) Σ_{m=1}^{M} Σ_{n=1}^{N} ( I(m, n) − I′(m, n) )²

    PSNR = 20 ∗ log10( 255 / √MSE )                                          (12)

    Compression ratio = Uncompressed image size / Compressed image size

where M × N is the size of the image, I(m, n) is the original image and I′(m, n) is the reconstructed image.
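The quality metric of equation (12) can be sketched in Python as follows (a generic MSE/PSNR implementation under the usual 8-bit peak value of 255; our own code, not the authors'):

```python
import math

def mse(orig, recon):
    """Mean squared error between two equally sized greyscale images
    given as nested lists of pixel values."""
    rows, cols = len(orig), len(orig[0])
    total = sum((orig[m][n] - recon[m][n]) ** 2
                for m in range(rows) for n in range(cols))
    return total / (rows * cols)

def psnr(orig, recon):
    """PSNR in dB per equation (12), assuming an 8-bit peak value of 255."""
    return 20.0 * math.log10(255.0 / math.sqrt(mse(orig, recon)))

# A uniform error of one grey level gives MSE = 1 and
# PSNR = 20*log10(255), roughly 48.13 dB.
a = [[10, 10], [10, 10]]
b = [[11, 11], [11, 11]]
```

As with any PSNR implementation, identical images make the denominator zero, so callers should treat MSE = 0 as "infinite" PSNR.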
The results show that the reduction in PSNR for colour images like Football is around 10%, and the reduction in PSNR for smooth-transition images like Lena and Cameraman is around 4%, while PSNR is improved by 4% for sharp-transition images like the Circuit board image.
Table 6 (number of additions):

    Transform   Floating-point DCT [13]   Proposed Ramanujan DCT
    8 × 8       29                        36
    16 × 16     81                        96
    32 × 32     209                       240
    64 × 64     513                       576

Table 7 (floating-point multiplications vs. shifts):

    Transform   Floating-point DCT [13]   Proposed Ramanujan DCT
                (floating-point mults)    (shifts)
    8 × 8       12                        12
    16 × 16     36                        36
    32 × 32     80                        80
    64 × 64     384                       384
TABLE 8: Performance comparison of Ramanujan DCT and Floating-point DCT.

    Image           Proposed Ramanujan DCT         Floating-point DCT [13]
                    Compression ratio  PSNR (dB)   Compression ratio  PSNR (dB)
    Circuit         5.30:1             72.93       5.6:1              69.01
    Football        5.62:1             69.24       5.97:1             77.20
    Lena            4.20:1             62.04       4.56:1             65.77
    Medical image   5.74:1             62.91       5.74:1             68.1
FIGURE 3: The original image and the reconstructed images using floating-point DCT and Ramanujan DCT.
Figure 3 shows the original image and the reconstructed images obtained using both the RDCT and the DCT2 function applied in the JPEG compression standard. The difference image between the original and reconstructed images shows that the error in the cosine approximation is negligible.
5. CONCLUSIONS
We have presented a method for approximating the cosine function using Ramanujan ordered numbers of degree-2. The cosine function is evaluated using a 4th-degree polynomial with an approximation error of the order of 10^-3. This method allows us to evaluate the cosine function using only integers which are powers of 2, thereby replacing the complex floating-point multiplications by shifters and adders. The algorithm takes (N/2) log2 N shift and (3N/2) log2 N − N + 1 addition operations to evaluate the N-point DCT coefficients. The cosine approximation increases the overhead on the number of adders by 13.6%. The proposed algorithm reduces the computational complexity and hence improves the speed of evaluation of the DCT coefficients, and it also reduces the complexity of a hardware implementation on an FPGA. The results show that the reconstructed image is almost the same as that obtained by evaluating the floating-point DCT.
6. REFERENCES
1. J.-M. Muller, Elementary Functions: Algorithms and Implementation, Birkhäuser, Boston, 1997.
2. F. de Dinechin and A. Tisserand, “Multipartite table methods,” IEEE Transactions on Computers, vol. 54, no. 3, pp. 319-330, Mar. 2005.
3. M. Schulte and J. Stine, “Approximating elementary functions with symmetric bipartite tables,” IEEE Transactions on Computers, vol. 48, no. 8, pp. 842-847, Aug. 1999.
4. J. Detrey and F. de Dinechin, “Second order function approximation using a single multiplication on FPGAs,” in 14th International Conference on Field-Programmable Logic and Applications, Aug. 2004, pp. 221-230, LNCS 3203.
5. A. Tisserand, “Hardware operator for simultaneous sine and cosine evaluation,” in ICASSP 2006, vol. III, pp. 992-995.
6. N. Brisebarre, J.-M. Muller, A. Tisserand and S. Torres, “Hardware operators for function evaluation using sparse-coefficient polynomials,” Electronics Letters, vol. 42, no. 25 (2006), pp. 1441-1442.
7. T. Tran, “The BinDCT: Fast multiplierless approximation of the DCT,” IEEE Signal Processing Letters, vol. 7, no. 6, pp. 141-144, 2000.
8. Y. Zeng, L. Cheng, G. Bi and A. C. Kot, “Integer DCTs and fast algorithms,” IEEE Transactions on Signal Processing, vol. 49, no. 11, Nov. 2001.
9. X. Hui and L. Cheng, “An integer hierarchy: Lapped biorthogonal transform via lifting steps and application in image coding,” in Proc. ICSP-02, 2002.
10. N. Bhatnagar, “On computation of certain discrete Fourier transforms using binary calculus,” Signal Processing (Elsevier), vol. 43, 1995.
11. Geetha K.S and M. Uttarakumari, “Multiplierless recursive algorithm using Ramanujan ordered numbers,” IETE Journal of Research, vol. 56, issue 4, Jul-Aug 2010.
12. K. R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications. New York: Academic, 1990.
13. H. S. Hou, “A fast recursive algorithm for computing the discrete cosine transform,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 35, pp. 1455-1461, Oct. 1987.
14. N. I. Cho and S. U. Lee, “A fast 4×4 DCT algorithm for the recursive 2-D DCT,” IEEE Trans. Signal Processing, vol. 40, pp. 2166-2173, Sep. 1992.