Distributed parameter systems (DPS) are among the most complex systems in control theory. The transfer function of a DPS may contain rational, irrational, and nonlinear components, which makes studying it difficult in both the time domain and the frequency domain. In this paper, a systematic approach is proposed for linearizing a DPS. The approach is based on the real interpolation method (RIM), which approximates the transfer function of the DPS by a rational-order transfer function. The results of the numerical examples show that the method is simple, computationally efficient, and flexible.
A Novel Cosine Approximation for High-Speed Evaluation of DCT (CSCJournals)
This article presents a novel cosine approximation for high-speed evaluation of the DCT (Discrete Cosine Transform) using Ramanujan ordered numbers. The proposed method uses the Ramanujan ordered number to convert the angles of the cosine function to integers. These angles are then evaluated using a 4th-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation algorithm increases the overhead on the number of adders by 13.6%. The algorithm avoids floating-point multipliers and requires (N/2)log2(N) shifts and (3N/2)log2(N) - N + 1 addition operations to evaluate the N-point DCT coefficients, thereby improving the speed of computation of the coefficients.
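The polynomial step can be illustrated independently of the Ramanujan ordered numbers. A minimal sketch, assuming only that a generic 4th-degree least-squares fit to cos(x) on [0, pi/2] is acceptable (the coefficients below are illustrative, not the ones from the paper), confirms that such a polynomial reaches the claimed 10^-3 error order:

```python
import numpy as np

# Fit a 4th-degree polynomial to cos(x) on [0, pi/2] by least squares.
x = np.linspace(0.0, np.pi / 2, 1001)
coeffs = np.polyfit(x, np.cos(x), deg=4)

# Worst-case error of the polynomial approximation over the interval.
max_err = np.max(np.abs(np.polyval(coeffs, x) - np.cos(x)))
print(f"max error: {max_err:.2e}")  # comfortably below 1e-3
```

This matches the abstract's claim that a 4th-degree polynomial suffices for roughly three decimal digits of accuracy over one quadrant of the cosine.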
Interactive Fuzzy Goal Programming approach for Tri-Level Linear Programming ... (IJERA Editor)
The aim of this paper is to present an interactive fuzzy goal programming approach for determining the preferred compromise solution to tri-level linear programming problems, considering the imprecise nature of the decision makers' judgments about the objectives. Combining goal programming and fuzzy set theory with interactive programming, and improving the membership functions by changing the tolerances of the objectives, provides a satisfactory compromise (near-ideal) solution to the upper-level decision makers. Two numerical examples of three-level linear programming problems are solved to demonstrate the feasibility of the proposed approach, and its performance is evaluated against other approaches using metric distance functions.
This document presents new certified optimal solutions found by the Charibde algorithm for six difficult benchmark optimization problems. Charibde combines an evolutionary algorithm and interval-based methods in a cooperative framework. It has achieved optimality proofs for five bound-constrained problems and one nonlinearly constrained problem. These problems are highly multimodal and some had not been solved before even with approximate methods. The document also compares Charibde's performance to other state-of-the-art solvers, showing it is highly competitive while providing reliable optimality proofs.
A review of automatic differentiation and its efficient implementations (ssuserfa7e73)
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done in either forward or reverse mode. Forward mode propagates derivatives of the inputs alongside the evaluation, tracking how changes in the inputs influence the outputs, while reverse mode backpropagates sensitivities from the outputs to the inputs, which requires a forward evaluation pass followed by a separate derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of both speed and memory usage.
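Forward mode is easy to sketch with dual numbers. The minimal Python below is a generic illustration (not any particular AD library): each value carries its derivative, every elementary operation applies the chain rule, and the derivative of x*sin(x) + x at x = 2 falls out of a single evaluation.

```python
import math

class Dual:
    """Dual number val + dot*eps for forward-mode AD (eps**2 == 0)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule for an elementary operation: d sin(u) = cos(u) du
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

x = Dual(2.0, 1.0)        # seed the input derivative with 1
y = x * sin(x) + x        # f(x) = x*sin(x) + x
print(y.val, y.dot)       # y.dot equals sin(2) + 2*cos(2) + 1
```

Reverse mode would instead record these operations on a tape during the forward pass and accumulate adjoints in a second, backward pass.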
Assessing Error Bound For Dominant Point Detection (CSCJournals)
This document compares the error bounds of two classes of dominant point detection methods: 1) methods based on reducing a distance metric like maximum deviation or integral square error, and 2) methods based on digital straight segments. For distance-based methods, the error bound is determined by the maximum deviation of pixels from the line segments between dominant points. For digital straight segment methods, the error bound depends on control parameters that define blurred or approximate digital straight segments. The document analyzes specific methods in each class and plots the theoretical error bounds to facilitate understanding and parameter selection for dominant point detection methods.
An Interactive Decomposition Algorithm for Two-Level Large Scale Linear Multi... (IJERA Editor)
This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solving two-level large-scale linear multiobjective optimization problems with stochastic parameters in the right-hand side of the constraints, (TL-LSLMOP-SP)rhs, of block angular structure. In order to obtain a compromise (satisfactory) solution to the (TL-LSLMOP-SP)rhs of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. At each level, the dp-metric is used as the measure of "closeness", and a k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. Membership functions from fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by applying the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, in which the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.
Classification of handwritten characters by their symmetry features (AYUSH RAJ)
The document presents a technique for classifying handwritten characters based on their symmetry features. The Generalized Symmetry Transform is applied to digits from the USPS dataset to extract symmetry magnitude and orientation maps. These features are used to train Probabilistic Neural Networks, which are then compared to a network trained on the original data. The symmetry-trained networks classify the training data perfectly but generalize more poorly than the original-data network, achieving 87.2% and 72.2% accuracy, respectively, compared to 95.17% for the original-data network. While symmetry features can classify characters, the original data leads to better generalization performance.
Sequential order vs random order in operators of variable neighborhood descen... (TELKOMNIKA JOURNAL)
Many optimization problems require heuristic methods. Variable Neighborhood Search (VNS) is a metaheuristic that systematically changes its "neighborhood" in search of solutions. One method within VNS is Variable Neighborhood Descent (VND), which performs a deterministic neighborhood change; the neighborhood in VND can be changed in a random or a sequential order. This paper compares sequential and random neighborhood selection in solving the Capacitated Vehicle Routing Problem (CVRP), using 6 intra-route and 4 inter-route neighborhood structures. CVRP instances are taken from several dataset providers, and the initial solution is formed by the Sequential Insertion method. The experimental results show that random selection of neighborhood operators can provide a shorter route length (in 10 of the 13 datasets used) than sequential selection (which is better in only 3 datasets). However, random selection takes more iterations to reach the convergent state than sequential selection, and for sequential selection the order of the neighborhood structures affects the speed of convergence. Hence, random selection in the VND method is preferable to sequential selection.
This document proposes an approach for interactively smoothing experimental data using a Savitzky-Golay (S-G) filter. An optimization problem is formulated to minimize two criteria: total absolute error and integral smoothness. Pareto optimal solutions are determined for the filter parameters of polynomial degree and number of supporting points. The μ-selection method is used to find subsets of Pareto optimal solutions ranked by compromised efficiency. The highest ranked solution is the Salukvadze optimum that balances the criteria most evenly. The approach allows choosing S-G filter parameters that adequately represent and smooth the data. An example applies this to smooth measured acceleration data from a vibrating mechanism.
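The S-G filter itself is a local least-squares polynomial fit evaluated at the window center, which is exactly what the two tuning parameters (polynomial degree and number of supporting points) control. A minimal numpy sketch, with an illustrative function name and without the document's Pareto-based parameter selection:

```python
import numpy as np

def savgol_smooth(y, window, degree):
    """Minimal Savitzky-Golay smoothing: fit a least-squares polynomial
    of the given degree over each centered window (window must be odd)
    and keep the fitted value at the window center."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    # Row 0 of the pseudoinverse of the Vandermonde design matrix gives
    # the convolution weights that evaluate the local fit at offset 0.
    A = np.vander(offsets, degree + 1, increasing=True)
    weights = np.linalg.pinv(A)[0]
    ypad = np.pad(y, half, mode="edge")   # simple edge handling
    return np.convolve(ypad, weights[::-1], mode="valid")

# A degree-2 fit over a 5-point window reproduces any quadratic exactly
# in the interior, so pure signal passes through undistorted.
t = np.arange(20.0)
signal = 0.5 * t**2 - t
print(savgol_smooth(signal, 5, 2)[2:5])
```

Larger windows smooth more aggressively and lower degrees follow the data less closely, which is the trade-off the document's two criteria (total absolute error vs. integral smoothness) formalize.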
Second or fourth-order finite difference operators, which one is most effective? (Premier Publishers)
This paper presents higher-order finite difference (FD) formulas for the spatial approximation of time-dependent reaction-diffusion problems, with a clear justification through examples of why the fourth-order FD formula is preferred to its second-order counterpart, which has been widely used in the literature. Methods for the solution of initial and boundary value PDEs, such as the method of lines (MOL), are of broad interest in science and engineering. The key idea of MOL is to replace the spatial derivatives in the PDE with algebraic approximations. Once this is done, the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables; only one independent variable remains, and the resulting semi-discrete problem is a system of coupled ordinary differential equations (ODEs) in time. Thus, any integration algorithm for initial value ODEs can be applied to compute an approximate numerical solution to the PDE. The basic properties of these schemes, such as order of accuracy, convergence, consistency, stability, and symmetry, are examined.
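The second- vs. fourth-order comparison can be seen directly by measuring truncation errors of the two standard central-difference approximations to f''(x). A small numpy check, with an illustrative test function and step sizes: halving h should cut the error by about 4x for the 3-point formula and about 16x for the 5-point formula.

```python
import numpy as np

def d2_second(f, x, h):
    # classic 3-point central difference for f''(x), O(h^2)
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_fourth(f, x, h):
    # 5-point central difference for f''(x), O(h^4)
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

f, x = np.sin, 1.0
exact = -np.sin(x)          # (sin)'' = -sin
for h in (0.1, 0.05, 0.025):
    e2 = abs(d2_second(f, x, h) - exact)
    e4 = abs(d2_fourth(f, x, h) - exact)
    print(f"h={h:<6} 2nd-order err={e2:.2e}  4th-order err={e4:.2e}")
```

For a given spatial grid, the fourth-order operator is dramatically more accurate, which is the practical argument the paper makes for preferring it in MOL discretizations.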
An optimal general type-2 fuzzy controller for Urban Traffic Network (ISA Interchange)
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMS (sorajjournal)
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems, in which all the coefficients and decision variables of both the objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. In order to obtain a fuzzy optimal solution of the FFBLQP problems, the concept of the tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his or her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. The lower-level decision maker (LLDM) then uses this preference information from the ULDM and solves his or her problem subject to the ULDM's restrictions. Finally, the decomposed method is illustrated by a numerical example.
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique for processing a tensor matrix and extracting from it information that can be used directly in its visualization; this analysis forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of principal component analysis can be visualized using simple color-mapping techniques. The next parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (csandit)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. A point matching method using relaxation labeling exists in the literature; however, its compatibility coefficients always take a binary value, zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values that measure the correlation between edges, and we use a log-polar diagram to compute the correlations. Through simulations, we show that this topology-preserving relaxation method improves matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
This document discusses error propagation in quantitative analysis based on ratio spectra. It presents the mathematical foundations for studying noise-filtering properties and error propagation using ratio spectra and mean-centered ratio spectra methods. Computer modeling was performed to evaluate the standard deviation ratios and condition numbers for binary mixtures modeled by Gaussian doublets, showing the mean-centered ratio spectra method is more advantageous for proportional noise typical of UV-VIS instruments. The advantages and limitations of using ratio spectra methods are discussed.
The document proposes a new method for approximating matrix finite impulse response (FIR) filters using lower order infinite impulse response (IIR) filters. The method is based on approximating descriptor systems and requires only standard linear algebraic routines. Both optimal and suboptimal cases are addressed in a unified treatment. The solution is derived using only the Markov parameters of the FIR filter and can be expressed in state-space or transfer function form. The effectiveness of the method is illustrated with a numerical example and additional applications are discussed.
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis (sipij)
Texture analysis, such as segmentation and classification, plays a vital role in computer vision and pattern recognition and is widely applied in areas such as industrial automation, biomedical image processing, and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are constructed as windowed basis functions of the quaternion Fourier transform, also known as the hypercomplex Fourier transform. Based on this extension, the paper presents the use of these new quaternionic Gabor filters in colour texture image segmentation. Experimental results on two colour texture images are presented; we tested the robustness of the technique for segmentation by adding Gaussian noise to the texture images. The results indicate that the proposed method gives good segmentation results even in the presence of strong noise.
Machine learning is presented by Pranay Rajput. The agenda includes an introduction to machine learning, basics, classification, regression, clustering, distance metrics, and use cases. ML allows computer programs to learn from experience to improve performance on tasks. Supervised learning predicts labels or targets, while unsupervised learning finds hidden patterns in unlabeled data. Classification predicts class labels, regression predicts continuous values, and clustering groups similar data points. Distance metrics such as Euclidean, Manhattan, and cosine are used in ML models to measure similarity between data points. Common applications include recommendation systems, computer vision, natural language processing, and fraud detection. Popular frameworks for ML include scikit-learn, TensorFlow, and Keras.
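The three distance metrics mentioned are simple to write out. A plain-Python sketch (the vectors are chosen arbitrarily for illustration):

```python
import math

def euclidean(a, b):
    """Straight-line distance: sqrt of summed squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    """1 minus cosine similarity: compares direction, ignores magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

p, q = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(euclidean(p, q))        # 5.0
print(manhattan(p, q))        # 7.0
print(cosine_distance(p, q))  # small: the vectors point in similar directions
```

Which metric is appropriate depends on the model: Euclidean suits k-means-style clustering, Manhattan is more robust to single-coordinate outliers, and cosine is common for high-dimensional sparse data such as text.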
In this paper, we provide the average bit error probabilities of MQAM and MPSK in the presence of log-normal shadowing using the maximal ratio combining (MRC) technique for L diversity branches. We derive the probability density function (PDF) of the received signal-to-noise ratio (SNR) for L diversity branches in log-normal fading with MRC. We use the Fenton-Wilkinson method to estimate the parameters of a single log-normal distribution that approximates the sum of log-normal random variables (RVs). The results provided in this paper are an important tool for measuring the performance of communication links under log-normal shadowing.
Branch and Bound Feature Selection for Hyperspectral Image Classification (Sathishkumar Samiappan)
Feature selection (FS) is a classical combinatorial problem in pattern recognition and data mining, and it is of major importance in classification and regression scenarios. In this paper, a hybrid approach that combines branch-and-bound (BB) search with Bhattacharyya distance based feature selection is presented for classifying hyperspectral data using Support Vector Machine (SVM) classifiers. The performance of this hybrid approach is compared to another hybrid approach that uses genetic algorithm (GA) based feature selection in place of BB, and to baseline SVMs with no feature reduction. Experimental results using hyperspectral data show that under small sample size situations, the BB approach performs better than GA and than SVM with no feature selection.
A problem is provided and solved first by the graphical and analytical methods of linear programming, and then by the geometric and algebraic concepts of the simplex method.
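The graphical method's corner-point idea can be sketched in a few lines: enumerate intersections of constraint boundaries, keep the feasible ones, and evaluate the objective at each. The LP below is a hypothetical two-variable example, not the problem from the document.

```python
import itertools
import numpy as np

# Hypothetical LP: maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0.
A = np.array([[1, 0], [0, 2], [3, 2], [-1, 0], [0, -1]], float)
b = np.array([4, 12, 18, 0, 0], float)
c = np.array([3, 5], float)

best, best_val = None, -np.inf
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel boundaries: no vertex
    v = np.linalg.solve(M, b[[i, j]])     # intersection of two boundaries
    if np.all(A @ v <= b + 1e-9):         # keep only feasible corner points
        val = c @ v
        if val > best_val:
            best, best_val = v, val

print(best, best_val)  # optimum at (2, 6) with objective value 36
```

The simplex method reaches the same optimum without brute-force enumeration, by moving from one feasible corner point to an adjacent better one until no improvement is possible.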
This document provides an introduction to numerical methods and MATLAB programming for engineers. It covers topics such as vectors, functions, plots, and programming in MATLAB. The document is divided into multiple parts that cover various numerical methods topics, including solving equations, linear algebra, functions and data, and differential equations. MATLAB code and examples are provided throughout to demonstrate numerical techniques. The overall goal is to introduce both concepts of numerical methods and MATLAB programming within an engineering context.
The document discusses the simplex method for solving linear programming problems (LPP) with greater-than-or-equal-to constraints. It introduces the Big-M method, which transforms the LPP into standard form by adding artificial variables and penalizing them in the objective function. An example problem is presented to illustrate how artificial variables are incorporated and cost coefficients are transformed before constructing the simplex tableau. The document also discusses how the simplex method handles unbounded, multiple, and infeasible solutions. It explains that the simplex method can be used for both maximization and minimization problems with some minor modifications.
Trust Region Algorithm - Bachelor Dissertation (Christian Adom)
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
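The Cauchy point mentioned above has a simple closed form: minimize the quadratic model along the steepest-descent direction, clipped to the trust-region boundary. A sketch under standard notation (g is the gradient, B the model Hessian, delta the trust-region radius; the values below are arbitrary illustrations):

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the trust-region subproblem
    min m(p) = g@p + 0.5*p@B@p  subject to  ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    ps = -(delta / gnorm) * g        # steepest-descent step to the boundary
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                    # model non-convex along -g: go to boundary
    else:
        # unconstrained minimizer along -g, capped at the boundary
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return tau * ps

g = np.array([1.0, 2.0])
B = np.eye(2)
print(cauchy_point(g, B, delta=10.0))  # radius inactive: full step -g
print(cauchy_point(g, B, delta=0.5))   # radius active: step clipped to 0.5
```

The dogleg and double-dogleg steps refine this by blending the Cauchy point with the full Newton step when the model is convex.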
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
The document discusses the simplex algorithm for solving linear programming problems. It begins with an introduction and overview of the simplex algorithm. It then describes the key steps of the algorithm, which are: 1) converting the problem into slack format, 2) constructing the initial simplex tableau, 3) selecting the pivot column and calculating the theta ratio to determine the departing variable, 4) pivoting to create the next tableau. The document provides examples to illustrate these steps. It also briefly discusses cycling issues, software implementations, efficiency considerations and variants of the simplex algorithm.
This document discusses using isogeometric analysis to solve partial differential equations (PDEs) on lower dimensional manifolds, specifically surfaces. It introduces representing surfaces using non-uniform rational B-splines (NURBS) parametrization and mapping the surface to physical space. It proposes using the same NURBS basis functions for spatial discretization in isogeometric analysis to exactly represent the surface geometry. The document outlines error estimates for isogeometric analysis of second order PDEs on surfaces and highlights the accuracy and efficiency benefits of exact surface representation. Several examples of PDEs on surfaces, like the Laplace-Beltrami problem, are solved to demonstrate isogeometric analysis.
This document summarizes an academic paper that proposes modifying well-known local linear models for system identification by replacing their original recursive learning rules with outlier-robust variants based on M-estimation. It describes three existing local linear models - local linear map (LLM), radial basis function network (RBFN), and local model network (LMN) - and then introduces the concept of M-estimation as a way to make the learning rules of these models more robust to outliers. The performance of the proposed outlier-robust variants is evaluated on three benchmark datasets and is found to provide considerable improvement in the presence of outliers compared to the original models.
This document proposes a low complexity algorithm for jointly estimating the reflection coefficient, spatial location, and Doppler shift of a target for MIMO radar systems. It splits the estimation problem into two parts. The first part estimates the reflection coefficient in closed form. The second part jointly estimates the spatial location and Doppler shift using a 2D FFT approach. This allows significantly lower computational complexity compared to maximum likelihood estimation. Simulation results show the proposed estimator achieves the Cramér-Rao lower bound, providing optimal performance with low complexity.
This document proposes an approach for interactively smoothing experimental data using a Savitzky-Golay (S-G) filter. An optimization problem is formulated to minimize two criteria: total absolute error and integral smoothness. Pareto optimal solutions are determined for the filter parameters of polynomial degree and number of supporting points. The μ-selection method is used to find subsets of Pareto optimal solutions ranked by compromised efficiency. The highest ranked solution is the Salukvadze optimum that balances the criteria most evenly. The approach allows choosing S-G filter parameters that adequately represent and smooth the data. An example applies this to smooth measured acceleration data from a vibrating mechanism.
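The S-G smoothing step described above can be sketched with scipy's savgol_filter; the signal and both parameter sets below are made up for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy samples of a smooth signal (illustrative data)
x = np.linspace(0, 2 * np.pi, 101)
noisy = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

# Two candidate parameter sets: window length (number of supporting points)
# and polynomial degree are the two S-G filter parameters being traded off.
smooth_a = savgol_filter(noisy, window_length=11, polyorder=2)
smooth_b = savgol_filter(noisy, window_length=31, polyorder=4)

# Total absolute error criterion from the abstract, evaluated for each choice
err_a = np.sum(np.abs(smooth_a - noisy))
err_b = np.sum(np.abs(smooth_b - noisy))
```

Varying the window length and polynomial degree shifts the balance between fidelity to the data and smoothness, which is exactly the trade-off the Pareto analysis ranks.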
Second or fourth-order finite difference operators, which one is most effective?Premier Publishers
This paper presents higher-order finite difference (FD) formulas for the spatial approximation of time-dependent reaction-diffusion problems, with a clear justification through examples of why the fourth-order FD formula is preferred to its second-order counterpart that has been widely used in the literature. Methods for the solution of initial and boundary value PDEs, such as the method of lines (MOL), are of broad interest in science and engineering. The key idea of MOL is to replace the spatial derivatives in the PDE with algebraic approximations, after which the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables. With only one independent variable remaining, the resulting semi-discrete problem is a system of coupled ordinary differential equations (ODEs) in time, so any integration algorithm for initial value ODEs can be applied to compute an approximate numerical solution to the PDE. Basic properties of these schemes, such as order of accuracy, convergence, consistency, stability, and symmetry, are examined.
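The MOL procedure described above can be sketched for the 1-D heat equation; the grid size, time horizon, and BDF integrator below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for u_t = u_xx on [0,1] with u = 0 at both ends.
# The spatial derivative is replaced by an algebraic (finite difference)
# approximation, leaving a system of ODEs in time.
N = 51
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u0 = np.sin(np.pi * x)            # initial condition

def rhs(t, u):
    du = np.zeros_like(u)
    # second-order central difference on interior nodes
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return du                      # boundary entries stay zero (Dirichlet)

sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")

# exact solution of the heat equation for this initial condition
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
err = np.max(np.abs(sol.y[:, -1] - exact))
```

Swapping the second-order stencil for a fourth-order one changes only the `rhs` function, which is what makes the paper's comparison between the two formulas straightforward in this framework.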
An optimal general type-2 fuzzy controller for Urban Traffic NetworkISA Interchange
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMSorajjournal
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems in which all the coefficients and decision variables of both objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. To obtain a fuzzy optimal solution of the FFBLQP problem, the concept of a tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his/her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. The lower-level decision maker (LLDM) then uses this preference information from the ULDM and solves his/her problem subject to the ULDM's restrictions. Finally, the decomposition method is illustrated by a numerical example.
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can be used directly in its visualization; it forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using simple color-mapping techniques. Subsequent parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
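The principal component analysis step the chapter describes can be sketched for a single 3x3 diffusion tensor; the tensor values below are hypothetical, and fractional anisotropy is shown as one common scalar for color mapping.

```python
import numpy as np

# A symmetric 3x3 diffusion tensor (hypothetical values for illustration)
D = np.array([[3.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 0.5]])

# Principal component analysis of the tensor: eigenvalues give the glyph's
# semi-axis lengths, eigenvectors give its orientation.
eigvals, eigvecs = np.linalg.eigh(D)           # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fractional anisotropy, a common scalar for color mapping
mean = eigvals.mean()
fa = np.sqrt(1.5 * np.sum((eigvals - mean) ** 2) / np.sum(eigvals ** 2))
```

The sorted eigenvalues and eigenvectors are precisely the quantities tensor glyph and streamline-style techniques consume.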
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING csandit
This paper presents a relaxation labeling technique with newly defined compatibility measures
for solving a general non-rigid point matching problem. In the literature, there exists a point
matching method using relaxation labeling; however, its compatibility coefficients always take
a binary value, zero or one, depending on whether a point and a neighboring point have
corresponding points. Our approach generalizes this relaxation labeling approach: the
compatibility coefficients take n discrete values that measure the correlation between edges.
We use a log-polar diagram to compute correlations. Through simulations, we show that this
topology preserving relaxation method improves the matching performance significantly
compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust
point matching, robust point matching by preserving local neighborhood structures and
coherent point drift.
This document discusses error propagation in quantitative analysis based on ratio spectra. It presents the mathematical foundations for studying noise-filtering properties and error propagation using ratio spectra and mean-centered ratio spectra methods. Computer modeling was performed to evaluate the standard deviation ratios and condition numbers for binary mixtures modeled by Gaussian doublets, showing the mean-centered ratio spectra method is more advantageous for proportional noise typical of UV-VIS instruments. The advantages and limitations of using ratio spectra methods are discussed.
The document proposes a new method for approximating matrix finite impulse response (FIR) filters using lower order infinite impulse response (IIR) filters. The method is based on approximating descriptor systems and requires only standard linear algebraic routines. Both optimal and suboptimal cases are addressed in a unified treatment. The solution is derived using only the Markov parameters of the FIR filter and can be expressed in state-space or transfer function form. The effectiveness of the method is illustrated with a numerical example and additional applications are discussed.
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysissipij
Texture analysis such as segmentation and classification plays a vital role in computer vision and pattern recognition and is widely applied to many areas such as industrial automation, bio-medical image processing and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are
constructed as windowed basis functions of the quaternion Fourier transform also known as hypercomplex Fourier transform. Based on this extension this paper presents the use of these new
quaternionic Gabor filters in colour texture image segmentation. Experimental results on two colour texture images are presented. We tested the robustness of this technique for segmentation by adding Gaussian noise to the texture images. Experimental results indicate that the proposed method gives good segmentation results even in the presence of strong noise.
Machine learning is presented by Pranay Rajput. The agenda includes an introduction to machine learning, basics, classification, regression, clustering, distance metrics, and use cases. ML allows computer programs to learn from experience to improve performance on tasks. Supervised learning predicts labels or targets while unsupervised learning finds hidden patterns in unlabeled data. Popular algorithms include classification, regression, and clustering. Classification predicts class labels, regression predicts continuous values, and clustering groups similar data points. Distance metrics like Euclidean, Manhattan, and cosine are used in ML models to measure similarity between data points. Common applications involve recommendation systems, computer vision, natural language processing, and fraud detection. Popular frameworks for ML include scikit-learn, TensorFlow, Keras
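The three distance metrics named above can be sketched directly in numpy; the two vectors are arbitrary illustrative data.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])

# Euclidean: straight-line distance
euclidean = np.sqrt(np.sum((a - b) ** 2))

# Manhattan: sum of absolute coordinate differences
manhattan = np.sum(np.abs(a - b))

# Cosine: 1 minus the cosine of the angle between the vectors;
# parallel vectors (as here, b = 2a) have cosine distance 0
cosine_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
cosine_dist = 1.0 - cosine_sim
```

Note that cosine distance ignores magnitude entirely, which is why it suits document vectors and recommendation systems, while Euclidean and Manhattan do not.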
In this paper, we provide the average bit error probabilities of MQAM and MPSK in the presence of log-normal shadowing using the maximal ratio combining (MRC) technique for L diversity branches. We derive the probability density function (PDF) of the received signal-to-noise ratio (SNR) for L diversity branches in log-normal fading with MRC. We use the Fenton-Wilkinson method to estimate the parameters of a single log-normal distribution that approximates the sum of log-normal random variables (RVs). The results provided in this paper are an important tool for measuring the performance of communication links under log-normal shadowing.
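The Fenton-Wilkinson step can be sketched as a moment-matching computation; the component parameters below are hypothetical, and a Monte Carlo draw checks the matched mean.

```python
import numpy as np

# Fenton-Wilkinson sketch: approximate the sum of independent log-normal RVs
# by a single log-normal whose first two moments match those of the sum.
# (mu_i, sigma_i) are parameters in the underlying normal domain.
mu = np.array([0.0, 0.2, -0.1])
sigma = np.array([0.5, 0.4, 0.6])

# moments of each log-normal term
m1 = np.exp(mu + sigma**2 / 2)            # means
m2 = np.exp(2 * mu + 2 * sigma**2)        # second moments
mean_sum = m1.sum()
var_sum = (m2 - m1**2).sum()              # independence: variances add

# matched log-normal parameters for the sum
sigma_z2 = np.log(1.0 + var_sum / mean_sum**2)
mu_z = np.log(mean_sum) - sigma_z2 / 2

# Monte Carlo check: sample the true sum and compare its mean
rng = np.random.default_rng(1)
samples = np.exp(rng.normal(mu, sigma, size=(200000, 3))).sum(axis=1)
```

By construction the fitted log-normal exp(N(mu_z, sigma_z2)) reproduces the sum's mean and variance exactly; only the higher moments are approximated.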
Branch and Bound Feature Selection for Hyperspectral Image Classification Sathishkumar Samiappan
Feature selection (FS) is a classical combinatorial problem in pattern recognition and data mining. It finds major importance in classification and regression scenarios. In this paper, a hybrid approach that combines branch-and-bound (BB) search with Bhattacharya distance based feature selection is presented for classifying hyperspectral data using Support Vector Machine (SVM) classifiers. The performance of this hybrid approach is compared to another hybrid approach that uses genetic algorithm (GA) based feature selection in place of BB. It is also compared to baseline SVMs with no feature reduction. Experimental results using hyperspectral data show that under small sample size situations, BB approach performs better than GA and SVM with no feature selection.
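The Bhattacharyya distance used to score candidate features can be sketched for two classes modeled as univariate Gaussians; the parameters below are illustrative, whereas the paper applies the measure to hyperspectral bands.

```python
import numpy as np

# Bhattacharyya distance between two univariate Gaussian class models:
# a separability score a branch-and-bound search can use to rank features.
def bhattacharyya(mu1, var1, mu2, var2):
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2)))
    return term_mean + term_var

d_separable = bhattacharyya(0.0, 1.0, 3.0, 1.0)   # well-separated means
d_identical = bhattacharyya(0.0, 1.0, 0.0, 1.0)   # identical distributions
```

A higher distance means better class separability, so a feature (or subset) with a larger score is preferred; identical class distributions score exactly zero.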
A problem is presented and solved using the graphical and analytical methods of linear programming, and then solved again using the geometric and algebraic concepts of the simplex method.
This document provides an introduction to numerical methods and MATLAB programming for engineers. It covers topics such as vectors, functions, plots, and programming in MATLAB. The document is divided into multiple parts that cover various numerical methods topics, including solving equations, linear algebra, functions and data, and differential equations. MATLAB code and examples are provided throughout to demonstrate numerical techniques. The overall goal is to introduce both concepts of numerical methods and MATLAB programming within an engineering context.
The document discusses the simplex method for solving linear programming problems (LPP) with greater-than-or-equal-to constraints. It introduces the Big-M method, which transforms the LPP into standard form by adding artificial variables and penalizing them in the objective function. An example problem is presented to illustrate how artificial variables are incorporated and cost coefficients are transformed before constructing the simplex tableau. The document also discusses how the simplex method handles unbounded, multiple, and infeasible solutions. It explains that the simplex method can be used for both maximization and minimization problems with some minor modifications.
Trust Region Algorithm - Bachelor DissertationChristian Adom
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
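The Cauchy point mentioned above can be sketched directly from the standard formula for the quadratic model m(p) = f + gᵀp + ½pᵀBp subject to ||p|| <= Delta; the gradient, Hessian, and radius below are illustrative.

```python
import numpy as np

g = np.array([2.0, -1.0])                  # gradient at the current iterate
B = np.array([[2.0, 0.0], [0.0, 1.0]])     # model Hessian
Delta = 0.5                                # trust region radius

# Cauchy point: minimizer of the model along the steepest descent
# direction, clipped to the trust region boundary.
gBg = g @ B @ g
gnorm = np.linalg.norm(g)
if gBg <= 0:
    tau = 1.0                              # model unbounded along -g: go to boundary
else:
    tau = min(1.0, gnorm**3 / (Delta * gBg))
p_cauchy = -tau * (Delta / gnorm) * g

# model decrease achieved by the step (should be negative)
decrease = g @ p_cauchy + 0.5 * p_cauchy @ B @ p_cauchy
```

The Cauchy point always lies inside the region and always decreases the model, which is why methods like the double dogleg use it as the fallback that guarantees global convergence.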
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
The document discusses the simplex algorithm for solving linear programming problems. It begins with an introduction and overview of the simplex algorithm. It then describes the key steps of the algorithm, which are: 1) converting the problem into slack format, 2) constructing the initial simplex tableau, 3) selecting the pivot column and calculating the theta ratio to determine the departing variable, 4) pivoting to create the next tableau. The document provides examples to illustrate these steps. It also briefly discusses cycling issues, software implementations, efficiency considerations and variants of the simplex algorithm.
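The tableau steps above are what an LP solver executes internally; a quick way to check such a problem end-to-end is scipy's linprog. The example LP below is a textbook-style one, not taken from the document.

```python
from scipy.optimize import linprog

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# linprog minimizes, so negate the objective to turn max into min.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# optimal at x = 2, y = 6 with objective value 36
```

The slack-format conversion the document describes is done internally here: linprog adds slack variables to turn each inequality into an equality before pivoting.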
This document discusses various numerical analysis methods for solving differential and partial differential equations. It begins with a brief history of numerical analysis, then discusses different interpolation methods like Lagrangian interpolation. It also covers finite difference methods, finite element methods, spectral methods, and the method of lines - explaining how each method discretizes equations. The document concludes by discussing multigrid methods, which use a hierarchy of grids to accelerate convergence in solving equations.
This document discusses numerical methods for solving differential and partial differential equations. It begins by providing some historical context on the development of numerical analysis. It then discusses several common numerical methods including Lagrangian interpolation, finite difference methods, finite element methods, spectral methods, and finite volume methods. For each method, it provides a brief overview of the approach and discusses aspects like discretization, accuracy, computational cost, and common applications. Overall, the document serves as an introduction to various numerical techniques for approximating solutions to differential equations.
This document discusses algorithms for computing the Kolmogorov-Smirnov distribution, which is used to measure goodness of fit between empirical and theoretical distributions. It describes existing algorithms that are either fast but unreliable or precise but slow. The authors propose a new algorithm that uses different approximations depending on sample size and test statistic value, to provide fast and reliable computation of both the distribution and its complement. Their C program implements this approach using multiple methods, including an exact but slow recursion formula and faster but less precise approximations for large samples.
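For reference, scipy exposes both the one-sample test and the limiting Kolmogorov distribution that such algorithms compute; the sample below is illustrative.

```python
import numpy as np
from scipy import stats

# Goodness-of-fit check: the two-sided Kolmogorov-Smirnov test compares an
# empirical sample against a theoretical CDF.
rng = np.random.default_rng(42)
sample = rng.normal(size=500)

stat, pvalue = stats.kstest(sample, "norm")   # test against standard normal

# The asymptotic distribution of sqrt(n) * D_n is the Kolmogorov limiting
# distribution, available as stats.kstwobign; its survival function gives
# the large-sample approximation to the p-value.
approx_p = stats.kstwobign.sf(np.sqrt(sample.size) * stat)
```

The trade-off the document targets is visible here: the exact p-value computation is slow for large n, while the kstwobign approximation is fast but only asymptotically correct.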
Discrete-wavelet-transform recursive inverse algorithm using second-order est...TELKOMNIKA JOURNAL
The recursive least-squares (RLS) algorithm was introduced as an alternative to the LMS algorithm with enhanced performance. Computational complexity and instability in updating the autocorrelation matrix are among the drawbacks of the RLS algorithm that motivated the introduction of the second-order recursive inverse (RI) adaptive algorithm. The second-order RI adaptive algorithm suffered from a low convergence rate in certain scenarios that required a relatively small initial step-size. In this paper, we propose a new second-order RI algorithm that projects the input signal into a new domain, namely the discrete wavelet transform (DWT), as a preliminary step before running the algorithm. This transformation overcomes the low convergence rate of the second-order RI algorithm by reducing the self-correlation of the input signal in the mentioned scenarios. Experiments are conducted in a noise cancellation setting. The performance of the proposed algorithm is compared to those of the RI, original second-order RI, and RLS algorithms in different Gaussian and impulsive noise environments. Simulations demonstrate the superiority of the proposed algorithm in terms of convergence rate compared to those algorithms.
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p3) to O(m3), where m is a small subset of points p. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to do principal warp analysis.
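Method 3's Nyström idea can be sketched on a generic kernel matrix; a Gaussian kernel is used below purely for a stable illustration, in place of the thin plate spline kernel, and all sizes are arbitrary.

```python
import numpy as np

# Nyström sketch: approximate a full p x p kernel matrix K from an m-column
# subset as K ≈ C W⁺ Cᵀ, where C = K[:, idx] and W = K[idx][:, idx].
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))            # p = 200 points

# Gaussian (RBF) kernel matrix over all points
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)

m = 40                                     # small landmark subset, m << p
idx = rng.choice(200, size=m, replace=False)
C = K[:, idx]
W = K[np.ix_(idx, idx)]
K_approx = C @ np.linalg.pinv(W) @ C.T     # rank-m Nyström approximation

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Only the m x m block W is (pseudo-)inverted, which is the source of the O(p³) to O(m³) reduction the document reports.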
Intercept probability analysis in DF time switching full-duplex relaying netw...TELKOMNIKA JOURNAL
In this research, we propose and investigate intercept probability analysis in DF time switching relaying full-duplex with impact of Co-channel interference at the eavesdropper. In the beginning stage, we present the DF time switching relaying full-duplex with the Impact of Co-channel interference at the eavesdropper. Furthermore, the closed-form expression of the intercept probability (IP) is analyzed and derived in connection with the primary system parameters. Finally, the Monte Carlo simulation is performed for verifying the correctness of the analytical section. From the research results, a novel solution and some recommendations can be proposed for the communication network in the near future.
This document summarizes finite difference modeling methods used at M-OSRP. It discusses:
1) The second order time and fourth order space finite difference schemes used to model acoustic wave propagation.
2) How boundary conditions like Dirichlet/Neumann generate strong spurious reflections that can mask true events.
3) The importance of accurate source fields for modeling - better source fields lead to more accurate linear inversions and the ability to observe phenomena like polarity reversals in modeled data.
Finite-difference modeling, accuracy, and boundary conditions- Arthur Weglein...Arthur Weglein
This short report gives a brief review on the finite difference modeling method used in MOSRP
and its boundary conditions as a preparation for the Green’s theorem RTM. The first
part gives the finite difference formulae we used and the second part describes the implemented
boundary conditions. The last part, using two examples, points out some impacts of the accuracy
of source fields on the results of modeling.
Low power cost rns comparison via partitioning the dynamic rangeNexgen Technology
Cooperative underlay cognitive radio assisted NOMA: secondary network improve...TELKOMNIKA JOURNAL
In this paper, a downlink scenario of a non-orthogonal multiple access (NOMA) scheme with a power constraint via spectrum sensing is considered. Such a network provides improved outage performance, and a new scheme of NOMA-based cognitive radio (CR-NOMA) network is introduced. Different power allocation factors are examined subject to the performance gap among the secondary NOMA users. To evaluate system performance, exact outage probability expressions of the secondary users are derived. Finally, the dissimilar performance problem in terms of secondary users is illustrated via simulation, in which a power allocation scheme and the threshold rates are considered as the main influences on system performance. The simulation results show that the performance of the CR-NOMA network can be improved significantly.
This document summarizes a study on analyzing the impact of impulse noise on OFDM systems using three adaptive algorithms: LMS, NLMS, and RLS. It first describes OFDM systems and impulse noise modeling. It then provides details on the three algorithms - LMS uses a least mean square approach, NLMS is a normalized version of LMS, and RLS uses a recursive least squares approach. Simulation results show transmitted OFDM signals and spectra, as well as BER plots for the different algorithms under varying SNR levels. RLS is found to have the best performance with minimum BER, followed by NLMS, and then LMS. The document concludes RLS is the best algorithm to use for its sustainability to higher
BER Analysis ofImpulse Noise inOFDM System Using LMS,NLMS&RLSiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
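The LMS update at the core of the comparison above can be sketched in a toy system-identification setting; this is not the paper's OFDM/impulse-noise configuration, and the true weights are invented for illustration.

```python
import numpy as np

# Minimal LMS sketch: adapt weights w so that wᵀx(n) tracks a desired d(n).
rng = np.random.default_rng(3)
true_w = np.array([0.5, -0.3, 0.2])        # unknown system to identify
n_taps = 3
x = rng.normal(size=2000)                  # white input signal

w = np.zeros(n_taps)
mu = 0.05                                  # LMS step size
for n in range(n_taps, len(x)):
    xn = x[n - n_taps:n][::-1]             # tap-delay-line input vector
    d = true_w @ xn                        # desired (noiseless) output
    e = d - w @ xn                         # a-priori error
    w = w + mu * e * xn                    # LMS weight update
```

NLMS divides the update by ||xn||² to normalize the step, and RLS replaces the scalar step with a recursively updated inverse correlation matrix, trading computation for the faster convergence the study observes.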
BEHAVIOR STUDY OF ENTROPY IN A DIGITAL IMAGE THROUGH AN ITERATIVE ALGORITHM O...ijscmcj
Image segmentation is a critical step in computer vision tasks constituting an essential issue for pattern
recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images
through an iterative algorithm of mean shift filtering. The order of a digital image in gray levels is defined.
The behavior of Shannon entropy is analyzed and then compared, taking into account the number of
iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The
use of equivalence classes is introduced, which allows us to interpret entropy as a hyper-surface in real m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image is used to group the iterations, in order to characterize the performance of the algorithm.
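The Shannon entropy of a gray-level image referred to above can be sketched from the image histogram; the two synthetic images below are for illustration only.

```python
import numpy as np

# Shannon entropy of a grayscale image from its gray-level histogram,
# the quantity tracked across the mean shift filtering iterations.
def image_entropy(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
uniform_img = rng.integers(0, 256, size=(64, 64))   # near-maximal disorder
flat_img = np.full((64, 64), 128)                   # single gray level

h_uniform = image_entropy(uniform_img)   # close to the maximum log2(256) = 8
h_flat = image_entropy(flat_img)         # exactly 0
```

Mean shift filtering merges gray levels across iterations, so the entropy computed this way decreases toward the ordered, segmented state the paper analyzes.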
1. The document describes using a finite difference method to solve a boundary value problem numerically.
2. It presents the theory behind finite difference methods and discretizing differential equations into algebraic equations that can be solved on a computer.
3. It includes the MATLAB script that implements the finite difference method to solve the given boundary value problem numerically, compares the numerical solution to the exact solution, and plots the results.
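A Python analogue of the MATLAB workflow just described can be sketched as follows; the specific boundary value problem below is an illustrative example, not the document's.

```python
import numpy as np

# Finite-difference sketch: solve u'' = -pi^2 sin(pi x), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
N = 50
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

# Central difference on the interior unknowns u_1..u_{N-1} gives a
# tridiagonal linear system A u = b.
main = -2.0 * np.ones(N - 1)
off = np.ones(N - 2)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
b = -np.pi**2 * np.sin(np.pi * x[1:-1])

u_inner = np.linalg.solve(A, b)
u = np.concatenate(([0.0], u_inner, [0.0]))   # reattach boundary values

err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) accuracy
```

This mirrors the three steps the document lists: discretize the differential equation into algebraic equations, solve the resulting system, and compare against the exact solution.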
This document compares the accuracy of determining volumes using close range photogrammetry versus traditional methods. It presents a case study where the volume of a test field was calculated using both approaches. Using traditional methods with 425 control points, the volume was calculated as 221475.14 m3 using trapezoidal rules, 221424.52 m3 using Simpson's rule, and 221484.05 m3 using Simpson's 3/8 rule. Using close range photogrammetry with 42 control points and 574 generated points, the volume was calculated as 215310.60 m3 using trapezoidal rules, 215300.43 m3 using Simpson's rule, and 215304.12 m3 using Simpson's 3/8 rule.
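The quadrature rules compared above can be illustrated on a smooth test integrand (not the surveyed terrain data): the integral of x² on [0, 2] is 8/3.

```python
import numpy as np
from scipy.integrate import simpson, trapezoid

x = np.linspace(0.0, 2.0, 101)
y = x ** 2

v_trap = trapezoid(y, x)     # trapezoidal rule, O(h^2) error
v_simp = simpson(y, x=x)     # Simpson's rule, exact for polynomials up to cubic
```

Simpson's rule reproduces the exact value here because the integrand is quadratic, while the trapezoidal result carries a small O(h²) error; this is the accuracy gap the volume comparison exploits.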
Computer Science
Active and Programmable Networks
Active safety systems
Ad Hoc & Sensor Network
Ad hoc networks for pervasive communications
Adaptive, autonomic and context-aware computing
Advance Computing technology and their application
Advanced Computing Architectures and New Programming Models
Advanced control and measurement
Aeronautical Engineering,
Agent-based middleware
Alert applications
Automotive, marine and aero-space control and all other control applications
Autonomic and self-managing middleware
Autonomous vehicle
Biochemistry
Bioinformatics
BioTechnology(Chemistry, Mathematics, Statistics, Geology)
Broadband and intelligent networks
Broadband wireless technologies
CAD/CAM/CAT/CIM
Call admission and flow/congestion control
Capacity planning and dimensioning
Changing Access to Patient Information
Channel capacity modelling and analysis
Civil Engineering,
Cloud Computing and Applications
Collaborative applications
Communication application
Communication architectures for pervasive computing
Communication systems
Computational intelligence
Computer and microprocessor-based control
Computer Architecture and Embedded Systems
Computer Business
Computer Sciences and Applications
Computer Vision
Computer-based information systems in health care
Computing Ethics
Computing Practices & Applications
Congestion and/or Flow Control
Content Distribution
Context-awareness and middleware
Creativity in Internet management and retailing
Cross-layer design and Physical layer based issue
Cryptography
Data Base Management
Data fusion
Data Mining
Data retrieval
Data Storage Management
Decision analysis methods
Decision making
Digital Economy and Digital Divide
Digital signal processing theory
Distributed Sensor Networks
Drives automation
Drug Design,
Drug Development
DSP implementation
E-Business
E-Commerce
E-Government
Electronic transceiver device for Retail Marketing Industries
Electronics Engineering,
Embeded Computer System
Emerging advances in business and its applications
Emerging signal processing areas
Enabling technologies for pervasive systems
Energy-efficient and green pervasive computing
Environmental Engineering,
Estimation and identification techniques
Evaluation techniques for middleware solutions
Event-based, publish/subscribe, and message-oriented middleware
Evolutionary computing and intelligent systems
Expert approaches
Facilities planning and management
Flexible manufacturing systems
Formal methods and tools for designing
Fuzzy algorithms
Fuzzy logics
GPS and location-based app
Similar to Real interpolation method for transfer function approximation of distributed parameter system (20)
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product's manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW) and global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models: support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB); two deep learning (DL) models: convolutional neural network (CNN) and long short-term memory (LSTM); and a standalone bidirectional encoder representations from transformers (BERT) model are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% upon word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
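The classical-ML branch of such a pipeline can be sketched with scikit-learn; the tiny review set below is made up for illustration and is not from the Amazon datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF n-gram features feeding a logistic regression sentiment classifier
reviews = [
    "great product works perfectly",
    "excellent quality highly recommend",
    "terrible broke after one day",
    "awful waste of money",
]
labels = [1, 1, 0, 0]          # 1 = positive, 0 = negative

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    LogisticRegression(),
)
clf.fit(reviews, labels)
pred = clf.predict(["great quality"])
```

The same fitted vectorizer/classifier pattern extends to the other ML models in the study by swapping LogisticRegression for SVM, random forest, or multinomial Naïve Bayes.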
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. In this work, Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to show the recommended antenna design. The goal of this study was to get a more extensive transmission capacity, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was to get a higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the supplied antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the recreation uncovered that the transfer speed side-lobe level at phi was much better than those of the earlier works, at -28.8 dB, respectively. Thus, it makes a solid contender for remote innovation and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos.The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often effects the characteristics of either linear or nonlinear systems. Plus, the stabilization process may be deteriorated thus incurring a catastrophic effect to the system performance. As such, this manuscript addresses the concept of matching condition for the systems that are suffering from miss-match uncertainties and exogeneous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in the proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty and the system with miss-matched uncertainty. Lastly, two-dimensional numerical expressions are provided to practice the proposed proposition. The outcome shows that matching condition has the ability to change the system to a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance ver the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
ISSN: 1693-6930
TELKOMNIKA Vol. 17, No. 4, August 2019: 1941-1947
which assigns the image function F(δ) to the original function f(t) as a function of the real variable δ. The formula of the direct transform can be considered a special case of the direct Laplace transform, obtained by replacing the complex variable s with the real variable δ. Another step towards the development of the instrumentation method is the transition from the continuous functions F(δ) to their discrete form, using computing resources and numerical methods. For these purposes, the RIM is represented by the numerical characteristics {F(δ_i)}_N, obtained as the set of values of the function F(δ) at the nodes δ_i, i ∈ {1, 2, ..., N}, where N is the number of elements of the numerical characteristic, called its dimension [18-21].
Selecting the interpolation nodes δ_i is the primary step in the transition to a discrete form, and it has a significant impact on the numerical computation and on the accuracy of the solution. In the simplest variant the nodes are distributed uniformly. Another important advantage of the RIM is its cross-conversion property, which is due to the fact that the behavior of the function F(δ) for large values of the argument δ is determined mainly by the behavior of the original f(t) for small values of the variable t; conversely, the behavior of F(δ) for small values of δ is determined mainly by the behavior of f(t) for large values of t.
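As a concrete illustration (not part of the paper), forming the numerical characteristic {F(δ_i)} on uniformly spaced nodes can be sketched in a few lines of Python; the image function F is assumed here to be available in closed form on the real axis, and the sample function and node range are only an example:

```python
import numpy as np

def numerical_characteristic(F, delta_min, delta_max, N):
    """Sample the real-variable image F(delta) at N uniformly spaced nodes,
    giving the numerical characteristic {F(delta_i)} of dimension N."""
    nodes = np.linspace(delta_min, delta_max, N)
    return nodes, F(nodes)

# Hypothetical example: the image of G1(s) = exp(-sqrt(s)) restricted to real delta
nodes, values = numerical_characteristic(lambda d: np.exp(-np.sqrt(d)), 0.001, 1.0, 13)
```

Non-uniform node distributions fit the same interface; only `np.linspace` would change.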
3. Approximation Algorithm
In this paper, we consider the following approximation task for distributed parameter systems. The transfer function of a DPS is complex, containing fractional and/or transcendental components, and typically has a form such as (2). The presence of these fractional and/or transcendental components does not allow using the methods and means of lumped parameter systems.

G(s) = G(e^(-√(T1·s)), 1/√(T2·s), √s, sh(s), ch(s), ...) (2)
Let us consider the rational transfer function:

W(s) = B(s)/A(s) = (b_m·s^m + ... + b_1·s + b_0) / (a_n·s^n + ... + a_1·s + a_0) (3)
where m ≤ n and m, n are integers; this rational function is used to approximate the transfer function G(s) of the linear fractional-order system. For (G(0) ≠ 0, b_0 = 1) or (G(0) = 0, a_0 = 1) there are N = n + m + 1 real coefficients, which should be determined from N equations obtained from the condition that the numerical characteristics coincide at the corresponding discrete points:

G(δ_i) − B(δ_i)/A(δ_i) = 0, i = 1, ..., N, (4)

G(δ_i)·A(δ_i) − B(δ_i) = 0, i = 1, ..., N. (5)
or, for G(0) = 0, a_0 = 1, one obtains

a_n·δ_i^n·G(δ_i) + ... + a_1·δ_i·G(δ_i) − b_m·δ_i^m − ... − b_0 = −G(δ_i), i = 1, ..., N. (6)
For fixed δ_i, both the numerator and the denominator polynomials are linear combinations of the unknown parameters. Thus, (6) represents a linear system of N equations, whose solution yields the N coefficients of the rational approximation (3). Equations (6) are conveniently rewritten in the following matrix form, which is easily solved using modern computer algebra packages. Introducing
M = [ δ_1^n·G(δ_1)  δ_1^(n−1)·G(δ_1)  …  δ_1·G(δ_1)  −δ_1^m  …  −1
      δ_2^n·G(δ_2)  δ_2^(n−1)·G(δ_2)  …  δ_2·G(δ_2)  −δ_2^m  …  −1
      …
      δ_N^n·G(δ_N)  δ_N^(n−1)·G(δ_N)  …  δ_N·G(δ_N)  −δ_N^m  …  −1 ] (7)
Real interpolation method for transfer function approximation of distributed parameter system (Phu Tran Tin)
B = [ −G(δ_1)
      −G(δ_2)
      …
      −G(δ_N) ] (8)
one easily obtains the desired system of linear equations in matrix form

M · X = B (9)

where X is the vector of unknown parameters:

X = [a_n, a_(n−1), ..., a_1, b_m, ..., b_0]^T. (10)
It is important to mention that the selected set of points δ ∈ {δ_1, δ_2, ..., δ_N} can produce a singular matrix in the system of equations. In such a case another, more appropriate set of points should be used. It is also significant to note that more than N interpolation points can be used in the selected set; an exact solution cannot be found in such a case.
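The assembly and solution of (7)-(10) can be sketched as follows. This is an illustrative Python implementation, not the authors' code; it uses the a_0 = 1 normalization of (6), and `np.linalg.lstsq` also covers the overdetermined case with more than N points, where the system is solved only approximately:

```python
import numpy as np

def rim_fit(delta, G_vals, n, m):
    """Solve the RIM conditions (6) in matrix form M @ X = B,
    with X = [a_n, ..., a_1, b_m, ..., b_0] and a_0 fixed to 1.
    With more rows than unknowns this becomes a least-squares fit."""
    delta = np.asarray(delta, dtype=float)
    G_vals = np.asarray(G_vals, dtype=float)
    cols = [delta**k * G_vals for k in range(n, 0, -1)]   # columns for a_n ... a_1
    cols += [-(delta**k) for k in range(m, -1, -1)]       # columns for b_m ... b_0
    M = np.column_stack(cols)
    X, *_ = np.linalg.lstsq(M, -G_vals, rcond=None)
    a = np.concatenate([X[:n], [1.0]])   # denominator [a_n, ..., a_1, a_0]
    b = X[n:]                            # numerator   [b_m, ..., b_0]
    return a, b
```

As a quick sanity check, fitting a function that is already rational, e.g. G(s) = (2s + 1)/(s^2 + 3s + 1) with n = 2, m = 1 on N = 4 distinct nodes, reproduces it at the nodes up to rounding.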
4. Numerical Examples and Discussion
In the first example, we consider a system with a distributed parameter transfer function whose exact solution is known, so that the result of the approximation task can be estimated by comparing the Heaviside responses and Bode characteristics of the approximation model with those of the exact model [22-26]. The given transfer function has the form:

G1(s) = e^(−√s) (11)

The transient response to a Heaviside excitation is the function h1(t) = erfc(1/(2√t)).
Since the considered model (11) satisfies G₁(∞) → 0 and G₁(0) = 1, we have chosen the approximation model (4) with b₀ = 1, a₀ = 1, and a denominator order higher than the numerator order. For simplicity, the orders of the approximated rational transfer function are m = 6 in the numerator and n = 7 in the denominator, so the number of unknown coefficients is N = n + m = 13. In the RIM we choose N = 13 equally spaced nodes δᵢ in the range [0.001, 1]. The results of the approximation are analysed in the time and frequency domains and compared to other methods with the same order of approximation model, where ∆h₁(t) is the time-response error of the RIM method.
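The fitting procedure for (11) can be illustrated with reduced orders (n = 3, m = 2 rather than the paper's 7 and 6, to keep the sketch short). With a₀ = b₀ = 1 fixed by G₁(0) = 1, the b₀ column moves to the right-hand side. This is an illustrative reconstruction under those assumptions, not the authors' code:

```python
import numpy as np

n, m = 3, 2                          # reduced orders for illustration (text uses n = 7, m = 6)
N = n + m                            # unknowns once a0 = b0 = 1 are fixed by G1(0) = 1
delta = np.linspace(0.001, 1.0, N)   # equally spaced real nodes
g = np.exp(-np.sqrt(delta))          # G1(delta) = exp(-sqrt(delta)) on the real axis

# Collocation rows: [d^n g, ..., d g, -d^m, ..., -d]; right-hand side: 1 - g
M = np.hstack([
    np.column_stack([delta**k * g for k in range(n, 0, -1)]),
    np.column_stack([-(delta**k) for k in range(m, 0, -1)]),
])
X = np.linalg.solve(M, 1.0 - g)
a, b = X[:n], X[n:]

# Rational model, coefficients highest power first (numpy.polyval order)
num = np.append(b, 1.0)              # b_m, ..., b_1, b_0 = 1
den = np.append(a, 1.0)              # a_n, ..., a_1, a_0 = 1
fit = np.polyval(num, delta) / np.polyval(den, delta)
```

By construction the rational model interpolates G₁ at the chosen nodes; the off-node error is what the time- and frequency-domain comparisons below assess.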
Figures 1 and 2 show that the approximation model fits the exact model closely. The accuracy is high in the range [0, 100] s, and the approximation error reaches its peak at around 300 s. The time-response errors are illustrated in Figure 2, where h₁(t) is the exact time response and h₁A(t) is the time response of the RIM model.
The Bode plots of the approximation models are shown in Figures 3 and 4; they illustrate the logarithmic magnitude responses, phase responses, and the corresponding error plots. Figures 3 and 4 show that the magnitude and phase errors of the considered approximation methods are lowest in the low-frequency range [10⁻³, 10] Hz; in the higher frequency regime, the RIM is less accurate.
ISSN: 1693-6930. TELKOMNIKA Vol. 17, No. 4, August 2019: 1941-1947
Figure 1. Time responses
Figure 2. Approximation error of time responses
Figure 3. Magnitude responses (LA1(ω): exact magnitude response; LA1A(ω): magnitude response of the RIM model)
Figure 4. Phase responses and error of the phase response (LArg1(ω): exact phase response; LArg1A(ω): phase response of the RIM model; ∆Arg1(ω): phase-response error of the RIM model)
In the second example, we consider the transfer function of a distributed parameter system of the form (12) [12]:
G₂(s) = [sinh(z₀√s/v) / sinh(ℓ√s/v)] / [1 − (1/2)e^(−z₀s) + sinh(z₀√s/v) / sinh(ℓ√s/v)] (12)
The parameters of the model G₂(s) in (12) are given as ℓ = 1, z₀ = ℓ/2, v = 1. From this transfer function it is impossible to find the exact time
response. For this reason, we find the approximation model and estimate its fitness in the frequency and time domains.
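Before fitting, G₂ can be evaluated directly on the positive real axis, where every term in (12) is real; a minimal sketch assuming the parameter values given above (the function name `G2` is ours):

```python
import numpy as np

def G2(s, ell=1.0, z0=0.5, v=1.0):
    """Model (12) on the positive real axis, with l = 1, z0 = l/2, v = 1."""
    r = np.sinh(z0 * np.sqrt(s) / v) / np.sinh(ell * np.sqrt(s) / v)
    return r / (1.0 - 0.5 * np.exp(-z0 * s) + r)
```

Evaluating near the endpoints reproduces the limits used below to fix the model: G₂(δ) → 0.5 as δ → 0 and G₂(δ) → 0 as δ → ∞.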
Since model (12) satisfies lim(δ→0) G₂(δ) = 0.5 and lim(δ→∞) G₂(δ) = 0, we have chosen the approximation model (4) with b₀ = 0.5, a₀ = 1, and a denominator order higher than the numerator order. As before, the orders of the approximated rational transfer function are m = 6 in the numerator and n = 7 in the denominator, so the number of unknown coefficients is N = n + m = 13. In the RIM we choose N = 13 equally spaced nodes δᵢ in the range [0.001, 15]. The results of the approximation are considered in the time and frequency domains. The time response of the approximation model is presented in Figure 5; the magnitude responses and their error are shown in Figure 6.
The Bode plots of the approximation models are shown in Figures 7 and 8. The approximation model is highly accurate up to the first resonance; in the higher frequency range, the approximation is less accurate. The diagrams lead to the same conclusion as above: the RIM model fits the Bode characteristics closely in the frequency range [0.001, 10] Hz, whereas in the higher range [10, 100] Hz it is inaccurate.
To evaluate the RIM for the approximation of distributed parameter systems, we carried out numerical examples with different systems and conducted an in-depth analysis of the RIM results in the time and frequency domains. The results above show that the time-domain accuracy of the RIM is high in the first example. In the frequency domain, the Bode characteristics of the RIM models fit closely in the low-frequency range [0.001, 10] Hz, while in the higher considered frequency range the RIM is less accurate. Overall, the RIM model fits the exact Bode characteristics over a wider range than the other methods.
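The Bode-magnitude comparison described above amounts to evaluating the rational model on the imaginary axis; a small helper (our naming, not from the paper):

```python
import numpy as np

def bode_mag_db(num, den, w):
    """Magnitude response 20*log10|G(jw)| of a rational model num(s)/den(s);
    coefficient arrays are listed highest power first, as numpy.polyval expects."""
    s = 1j * np.asarray(w, dtype=float)
    return 20.0 * np.log10(np.abs(np.polyval(num, s) / np.polyval(den, s)))
```

For example, for the first-order lag 1/(s + 1) this returns about −3 dB at ω = 1 rad/s, the familiar corner-frequency value.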
Figure 5. Time responses of the approximation model
Figure 6. Magnitude responses and error of the magnitude responses (LA2(ω): exact magnitude response; LA2A(ω): magnitude response of the RIM model; ∆L2A(ω): magnitude-response error of the RIM model)
Figure 7. Phase responses and error of the phase responses (Arg2(ω): exact phase response; Arg2A(ω): phase response of the RIM model; ∆Arg2(ω): phase-response error of the RIM model)
4. Conclusion
In this paper, a new approximation method for distributed parameter systems is presented. Its most significant feature is computational efficiency: all computations are carried out in the real domain, and the method is straightforward both conceptually and computationally. The results obtained in the examples are entirely satisfactory. The main drawback of the proposed method is that the accuracy of the approximation model in the high-frequency range cannot be guaranteed.
References
[1] Curtain R, Morris K. Transfer functions of distributed parameter systems: A tutorial. Automatica.
2009; 45(5): 1101–1116. doi:10.1016/j.automatica.2009.01.008.
[2] Belavý C, Hulkó G, Ondrejkovič K, Kubiš M, Bartalský L. Robust control of temperature field in
continuous casting process as distributed parameter systems. 2015 20th International Conference on
Process Control (PC). Strbske Pleso. 2015: 125-130. doi:10.1109/pc.2015.7169949.
[3] Coca D, Billings SA. Identification of Finite Dimensional Models of Infinite Dimensional Dynamical
Systems. Automatica. 2002; 38(11): 1851-1865. doi: 10.1016/s0005-1098(02)00099-7.
[4] Dung NQ, Minh TH. A Innovation Identification Approach of Control System by Irrational (Fractional)
Transfer. Indonesian Journal of Electrical Engineering and Computer Science. 2016; 3(2): 336-342.
doi:10.11591/ijeecs.v3.i2.pp336-342.
[5] Moallem A, Yazdani D, Bakhshai A, Jain P. Frequency domain identification of the utility grid
parameters for distributed power generation systems. 2011 Twenty-Sixth Annual IEEE Applied Power
Electronics Conference and Exposition (APEC). 2011: 965-969. doi:10.1109/apec.2011.5744711.
[6] Volosencu C. Identification of Distributed Parameter Systems Based on Sensor Networks. In: Er
Meng Joo. Editor. New Trends in Technologies: Control, Management, Computational Intelligence
and Network Systems. 2010: 369-394. doi:10.5772/10407.
[7] Dong X, Wang C. Identification of a class of hyperbolic distributed parameter systems via
deterministic learning. 2012 Third International Conference on Intelligent Control and Information
Processing. 2012: 97-102. doi: 10.1109/icicip.2012.6391543.
[8] Bartecki K. Approximation of a class of distributed parameter systems using proper orthogonal
decomposition. 2011 16th International Conference on Methods & Models in Automation & Robotics.
2011: 351-356. doi: 10.1109/mmar.2011.6031372.
[9] Bartecki K. PCA-based approximation of a class of distributed parameter systems: classical vs.
neural network approach. Bulletin of the Polish Academy of Sciences: Technical Sciences. 2012;
60(3): 651-60. doi: 10.2478/v10175-012-0077-7.
[10] Vidyasagar M, Anderson BD. Approximation and stabilization of distributed systems by lumped
systems. Systems & control letters. 1989; 12(2): 95-101. doi: 10.1016/0167-6911(89)90001-7.
[11] Di Loreto M, Damak S, Eberard D, Brun X. Approximation of linear distributed parameter systems by
delay systems. Automatica. 2016; 68: 162-8. doi:10.1016/j.automatica.2016.01.065.
[12] Jiang M, Deng H. Low-dimensional approximation strategy for a class of nonlinear distributed
parameter systems. 2011 Second International Conference on Mechanic Automation and Control
Engineering. 2011: 385-388. doi:10.1109/mace.2011.5986940.
[13] Park JH. Transfer function approximation via rationalized Haar transform in frequency domain.
International Journal of Control and Automation. 2014; 7(4): 247-58. doi:10.14257/ijca.2014.7.4.21.
[14] Belforte G, Dabbene F, Gay P. LPV approximation of distributed parameter systems in environmental
modelling. Environmental Modelling & Software. 2005; 20(8): 1063-1070. doi:
10.1016/j.envsoft.2004.09.015.
[15] Rashid T, Kumar S, Verma A, Gautam PR, Kumar A. Pm-EEMRP: Postural Movement Based Energy
Efficient Multi-hop Routing Protocol for Intra Wireless Body Sensor Network (Intra-WBSN).
TELKOMNIKA Telecommunication Computing Electronics and Control. 2018; 16(1): 166-73.
doi:10.12928/telkomnika.v16i1.7318.
[16] Morabito AF. Power Synthesis of Mask-Constrained Shaped Beams Through Maximally-Sparse
Planar Arrays. TELKOMNIKA Telecommunication Computing Electronics and Control. 2016; 14(4):
1217-1219.
[17] Tin PT, Minh TH, Trang TT, Dung NQ. Using Real Interpolation Method for Adaptive Identification of
Nonlinear Inverted Pendulum System. International Journal of Electrical and Computer Engineering
(IJECE). 2019; 9(2): 1078-1089. doi: 10.11591/ijece.v9i2.pp.1078-1089.
[18] Šekara TB, Rapaić MR, Lazarević MP. An Efficient Method for Approximation of Non Rational
Transfer Functions. Electronics. 2013; 17(1): 40-44. doi: 10.7251/els1317040s.
[19] Thakar U, Joshi V, Mehta U, Vyawahare VA. Fractional-Order PI Controller for Permanent Magnet
Synchronous Motor: A Design-Based Comparative Study. In: Azar AT, et al. Editors. Fractional Order
Systems. Academic Press. 2018: 553-78. doi:10.1016/b978-0-12-816152-4.00018-2.
[20] Bergh J, Löfström J. The Real Interpolation Method. In: Grundlehren Der Mathematischen
Wissenschaften Interpolation Spaces. 1976: 38-86. doi: 10.1007/978-3-642-66451-9_3.
[21] Antoulas AC. Approximation of large-scale dynamical systems. Philadelphia: Siam. 2005.
doi:10.1137/1.9780898718713.
[22] He W, Liu J. Active Vibration Control and Stability Analysis of Flexible Beam Systems. Singapore:
Springer. 2019. doi: 10.1007/978-981-10-7539-1.
[23] Bruneau M. Introduction to Non-Linear Acoustics, Acoustics in Uniform Flow, and Aero-Acoustics. In:
Bruneau M. Editor. Fundamentals of Acoustics. 2006: 511-75. doi: 10.1002/9780470612439.ch10.
[24] Curtain RF, Zwart H. Introduction. In: Texts in Applied Mathematics an Introduction to Infinite-
Dimensional Linear Systems Theory. 1995: 1-12. doi: 10.1007/978-1-4612-4224-6_1.
[25] Pelczynski A. Bases and the Approximation Property in Some Spaces of Analytic Functions. In:
Pelczynski A. Editor. Banach Spaces of Analytic Functions and Absolutely Summing Operators
CBMS Regional Conference Series in Mathematics. 1977: 65-70. doi:10.1090/cbms/030/12.
[26] Nguyen DQ. An effective approach of approximation of fractional order system using real
interpolation method. Journal of Advanced Engineering and Computation. 2017; 1(1): 39-47. doi:
10.25073/jaec.201711.48.