The document discusses necessary conditions for minimal system configurations of typical Takagi-Sugeno (TS) fuzzy systems and Mamdani fuzzy systems as universal approximators. It establishes that for TS fuzzy systems with trapezoidal input fuzzy sets and linear rule consequents, the number of input fuzzy sets and rules needed depends on the number and locations of the extrema of the function to be approximated. Functions with few extrema may require only a handful of rules, while periodic or oscillatory functions require many. It then compares these conditions with those previously established for Mamdani fuzzy systems, finding their minimal configurations comparable. Finally, it proves that the conditions can be reduced for TS systems using non-trapezoidal input fuzzy sets.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, covering new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
One of the fundamental issues in computer science is ordering a list of items. Although there are many sorting algorithms, the sorting problem has attracted a great deal of research, because efficient sorting is important for optimizing the use of other algorithms. This paper presents a new sorting algorithm that sorts elements based on their average and runs faster. The algorithm was analyzed, implemented, and tested, and the results are promising for random data.
BINARY TREE SORT IS MORE ROBUST THAN QUICK SORT IN AVERAGE CASE (IJCSEA Journal)
Average case complexity, in order to be a useful and reliable measure, has to be robust. The probability distribution, generally uniform, over which the expectation is taken should be realistic over the problem domain. But algorithm textbooks do not certify that uniform inputs are always realistic. Do the results hold even for non-uniform inputs? In this context we observe that Binary Tree sort is more robust than the fast and popular Quick sort in the average case.
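For readers unfamiliar with the algorithm under comparison, a minimal binary tree sort can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: insert each element into a binary search tree, then read the values back with an in-order traversal.

```python
def tree_sort(items):
    # Build a binary search tree from the input, then flatten it in order.
    root = None
    for x in items:
        root = _insert(root, x)
    out = []
    _inorder(root, out)
    return out

def _insert(node, x):
    # Each node is a [value, left, right] triple.
    if node is None:
        return [x, None, None]
    if x < node[0]:
        node[1] = _insert(node[1], x)
    else:
        node[2] = _insert(node[2], x)
    return node

def _inorder(node, out):
    if node is not None:
        _inorder(node[1], out)
        out.append(node[0])
        _inorder(node[2], out)

print(tree_sort([5, 1, 4, 1, 3]))  # → [1, 1, 3, 4, 5]
```

The tree's shape, and hence the running time, depends on the input order, which is exactly why the input distribution matters for the average-case claim above.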
Parametric Sensitivity Analysis of a Mathematical Model of Two Interacting Po... (IOSR Journals)
Experts in the mathematical modeling of two interacting technologies have observed the different contributions of the intraspecific and interspecific coefficients in conjunction with the starting population sizes and the trading period. In this complex multi-parameter system of competing technologies evolving over time, we have used the numerical method of mathematical norms to measure the sensitivity values of the intraspecific coefficients b and e, the starting population sizes of the two interacting technologies, and the duration of trading. We observed that the two intraspecific coefficients can be considered the most sensitive parameters, while the starting populations are the least sensitive. We expect these contributions to provide useful insights into determining the important parameters that drive the dynamics of the technological substitution model in the context of one-at-a-time sensitivity analysis.
Credal Fusion of Classifications for Noisy and Uncertain Data (IJECEIAES)
This paper reports on an investigation into classification techniques for noisy and uncertain data. Classification is not an easy task, and discovering knowledge from uncertain data is a significant challenge. Several problems arise. Often there is no good or sufficiently large training database for supervised classification. When training data contain noise or missing values, classification accuracy is affected dramatically. Extracting groups from data is also difficult when they overlap and are not well separated. Another problem is the uncertainty due to measuring devices. Consequently, the classification model is not robust enough to classify new objects. In this work, we present a novel classification algorithm to address these problems. We realize our main idea by using belief function theory to combine classification and clustering; this theory handles well the imprecision and uncertainty linked to classification. Experimental results show that our approach significantly improves the quality of classification on generic databases.
Algorithmic Dynamics of Cellular Automata (Hector Zenil)
Original presentation prepared for the opening keynote of the meeting in celebration of Prof. Harold McIntosh. This talk covers aspects of the complexity and behaviour of cellular automata, their emergent dynamic patterns, and the information dynamics of events such as particle collisions.
Initial Optimal Parameters of Artificial Neural Network and Support Vector Re... (IJECEIAES)
This paper presents the architecture of backpropagation Artificial Neural Network (ANN) and Support Vector Regression (SVR) models in a supervised learning process for a cement demand dataset. The study aims to identify the effectiveness of each parameter using the mean square error (MSE) indicator for the time series dataset. It varies the random sample in each demand parameter of the ANN and of the support vector function. For the ANN, the variations cover the percentage of the dataset used, the activation function (sigmoid and purelin), the learning rate, the hidden layers, the neurons, and the training function. The SVR is varied in its kernel function, loss function, and insensitivity to obtain the best result from simulation. The best ANN configuration uses the sigmoid activation function, 100% of the data input (96 records), a learning rate of 150, one hidden layer, the trainlm training function, 15 neurons, and 3 layers in total. The best SVR configuration runs six variables in optimal condition with a linear kernel function, an ε-insensitive loss function, and an insensitivity of 1. Both methods perform best with six variables. The contribution of this study is to obtain the optimal parameters for specific variables of ANN and SVR.
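The ε-insensitive loss named for the SVR model can be illustrated with a short sketch (the function name and sample values here are illustrative, not taken from the paper): errors inside the ε tube cost nothing, and larger errors are penalized linearly.

```python
def epsilon_insensitive_loss(y_true, y_pred, eps=1.0):
    # SVR's epsilon-insensitive loss: deviations within the eps tube are
    # free; anything beyond the tube is charged its excess distance.
    return [max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)]

print(epsilon_insensitive_loss([3.0, 5.0], [3.5, 7.5], eps=1.0))
# → [0.0, 1.5]
```

With the abstract's insensitivity of 1, any prediction within one unit of the target incurs no loss at all.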
A Bayesian approach to estimate probabilities in classification trees (NTNU)
Classification or decision trees are one of the most effective methods for supervised classification. In this work, we present a Bayesian approach to induce classification trees, based on a Bayesian score splitting criterion, and a new Bayesian method to estimate the probability of class membership based on Bayesian model averaging over the rules of the previously induced tree. In an experimental evaluation, we show that our approach reaches the performance of Quinlan's C4.5, one of the best-known decision tree inducers, in terms of predictive accuracy, and clearly outperforms it in terms of probability class estimates.
Financial Time Series Analysis Based On Normalized Mutual Information Functions (IJCI JOURNAL)
A method for analyzing the predictability of future values of financial time series is described. The method is based on normalized mutual information functions. Using these functions makes it possible to avoid imposing any restrictions on the distributions of the parameters or on the correlations between them. A comparative analysis of the predictability of financial time series from the Tel Aviv 25 stock exchange has been carried out.
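A normalized mutual information function can be sketched for discrete symbol sequences as below. This uses one common normalization, dividing the mutual information by H(Y); the paper does not specify which normalization it adopts, so treat this as an assumption.

```python
from collections import Counter
from math import log2

def entropy(xs):
    # Empirical Shannon entropy of a symbol sequence, in bits.
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def normalized_mutual_information(xs, ys):
    # I(X;Y) / H(Y): the fraction of Y's uncertainty removed by knowing X.
    # Assumes ys is not constant (H(Y) > 0).
    n = len(xs)
    joint = Counter(zip(xs, ys))
    h_joint = -sum(c / n * log2(c / n) for c in joint.values())
    mi = entropy(xs) + entropy(ys) - h_joint
    return mi / entropy(ys)

print(normalized_mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # → 1.0
print(normalized_mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

A value near 1 indicates that past symbols are highly informative about future ones, which is the sense of "predictability" used in the abstract.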
A Study on Youth Violence and Aggression using DEMATEL with FCM Methods (ijdmtaiir)
The DEMATEL method is a good technique for making decisions. In this paper we analyze the risk factors of youth violence and what makes young people more aggressive. Since there are many risk factors of youth violence, related to each other in complex ways, we construct an FCM to relate and analyze them. Moreover, the data are unsupervised, obtained from surveys as well as interviews; hence fuzzy methods alone have the capacity to analyze these concepts.
Discretization methods for Bayesian networks in the case of the earthquake (journalBEEI)
Bayesian networks are a graphical probability model that represents interactions between variables. The model has been widely applied in various fields, including disaster analysis. In field data we often find a mixture of variable types, a combination of continuous and discrete variables. For data processing with hybrid and continuous Bayesian networks, all continuous variables must be normally distributed. If normality is not satisfied, we offer a solution: discretize the continuous variables, then continue the process with discrete Bayesian networks. A variable can be discretized in various ways, including equal-width, equal-frequency, and K-means. The combination of BN and K-means is a new contribution of this study, called the K-means Bayesian networks (KMBN) model. We compared the three discretization methods using a confusion matrix. On the earthquake damage data, the K-means clustering method produced the highest accuracy. This result indicates that K-means is the best method for discretizing the data used in this study.
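The three discretization methods named above can each be sketched in a few lines (an illustrative sketch; the study's exact binning choices are not given):

```python
def equal_width_bins(values, k):
    # Split [min, max] into k intervals of equal width.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant column
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_frequency_bins(values, k):
    # Rank-based split: each bin receives roughly the same count.
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = rank * k // len(values)
    return labels

def kmeans_1d(values, k, iters=25):
    # Plain Lloyd's algorithm in one dimension, seeded with
    # equal-frequency cut points.
    srt = sorted(values)
    centers = [srt[(2 * j + 1) * len(srt) // (2 * k)] for j in range(k)]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels
```

Unlike the first two, K-means places the cut points where the data cluster, which is consistent with its higher accuracy on the earthquake data reported above.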
Modeling Crude Oil Prices (CPO) using General Regression Neural Network (GRNN) (AI Publications)
Modeling a time series is often associated with forecasting certain characteristics in the next period. One forecasting method developed in recent years uses artificial neural networks. A neural network can be a good solution for time series forecasting, but the problem is choosing the right network architecture and training method. The General Regression Neural Network (GRNN) is a radial-basis network model used to approximate a function. GRNN is a neural network model that produces a solution quickly, because no iterative weight estimation is needed. Its architecture has as many units in the pattern layer as there are input data. One application of GRNN is predicting crude oil prices. From training and testing on the data, we obtained a testing RMSE of 1.9355 and a training RMSE of 1.1048. The model gives quite accurate predictions, as shown by outputs close to the targets.
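The GRNN prediction described above, a kernel-weighted average over a pattern layer with one unit per training sample and no iterative weight training, can be sketched as follows. This is a minimal one-dimensional sketch using the standard Gaussian kernel; the bandwidth `sigma` is an assumed smoothing parameter, not a value from the paper.

```python
from math import exp

def grnn_predict(x, train_x, train_y, sigma=1.0):
    # GRNN output: a normalized, Gaussian-weighted average of the
    # training targets. Every training sample is its own pattern unit,
    # so "training" is just storing the data.
    weights = [exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * yi for w, yi in zip(weights, train_y)) / sum(weights)

print(grnn_predict(0.0, [-1.0, 1.0], [2.0, 4.0]))  # → 3.0
```

Because the estimate is a direct average over stored samples, prediction cost grows with the training set size, the trade-off for skipping iterative weight estimation.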
A New Method for Preserving Privacy in Data Publishing (cscpconf)
Protection of individuals’ privacy is a vital activity in data publishing. Government and public sector websites publish enormous amounts of data to share among their departments and with the public for research. The sensitive information of the individuals whose data are published must be protected. Privacy is challenged through two kinds of attack, namely attribute disclosure and identity disclosure. Early research contributions were made in this direction, and methods such as k-anonymity, ℓ-diversity, and t-closeness have evolved. The k-anonymity method preserves privacy against identity disclosure alone; it fails to address attribute disclosure. The ℓ-diversity method overcomes this drawback of k-anonymity, but fails against identity disclosure and, in some exceptional cases, attribute disclosure. The t-closeness method is good against attribute disclosure but not identity disclosure, and is also more complex than the other methods. In this paper, the authors propose a new method to preserve the privacy of individuals’ sensitive data against both attribute and identity disclosure attacks. In the proposed method, privacy preservation is achieved through generalization of quasi-identifiers by setting range values. The method was implemented and tested with various data sets, and found to preserve the privacy of published data against both attribute and identity disclosure attacks.
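Generalizing a quasi-identifier by setting range values, the core step of the proposed method, might look like this in outline. The attribute name, the bucket width, and the record layout are all hypothetical illustrations; the paper's actual ranges and attributes are not specified here.

```python
def generalize_ages(records, width=10):
    # Replace an exact age (a quasi-identifier) with the range it falls
    # in, e.g. 37 -> "30-39", so that several individuals share each
    # published value and re-identification becomes harder.
    out = []
    for rec in records:
        lo = (rec["age"] // width) * width
        out.append(dict(rec, age=f"{lo}-{lo + width - 1}"))
    return out

print(generalize_ages([{"age": 37, "disease": "flu"}]))
# → [{'age': '30-39', 'disease': 'flu'}]
```

Wider ranges give stronger anonymity but lower data utility, which is the usual tuning knob in generalization-based schemes.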
Have you ever created a machine learning model that is perfect on the training samples but gives very bad predictions on unseen samples? Did you ever wonder why this happens? This article explains overfitting, one of the reasons for poor predictions on unseen samples. A regression-based regularization technique is also presented in simple steps to make clear how to avoid overfitting.
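One simple regression-based regularization, ridge regression, shows the idea in the single-feature case (a sketch under that assumption; the article's exact steps may differ). The penalty λw² shrinks the fitted weight toward zero, which is what tames overfitting.

```python
def ridge_fit(xs, ys, lam=1.0):
    # One-feature ridge regression without intercept:
    # minimise sum((y - w*x)^2) + lam * w^2, whose closed-form
    # solution is w = sum(x*y) / (sum(x^2) + lam).
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + lam)

w_plain = ridge_fit([1, 2, 3], [2, 4, 6], lam=0.0)   # ordinary least squares
w_ridge = ridge_fit([1, 2, 3], [2, 4, 6], lam=14.0)  # shrunk toward zero
print(w_plain, w_ridge)  # → 2.0 1.0
```

With λ = 0 the fit reproduces ordinary least squares; raising λ trades a little training error for a smaller, more stable weight.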
FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS WITH MEDIAN FOR NORMALIZA... (csandit)
Recognizing faces helps to name the various subjects present in an image. This work focuses on labeling faces in an image containing human beings of various age groups (a heterogeneous set). Principal component analysis finds the mean of the data set and subtracts it from the data set in order to normalize the data. Normalization, with respect to an image, is the removal of common features from the data set. This work brings in the novel idea of deploying the median, another measure of central tendency, for normalization rather than the mean. The work was implemented using MATLAB. Results show that the median is the better measure for normalizing a heterogeneous data set that gives rise to outliers.
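The median-versus-mean normalization step can be illustrated directly (illustrative data; `center` is a hypothetical helper, not the paper's MATLAB code):

```python
from statistics import mean, median

def center(data, stat):
    # Normalise each feature (column) by subtracting a central value,
    # computed per column with the supplied statistic (mean or median).
    cols = list(zip(*data))
    centers = [stat(c) for c in cols]
    return [[v - m for v, m in zip(row, centers)] for row in data]

data = [[1, 10], [2, 20], [3, 30], [100, 40]]  # 100 is an outlier
print(center(data, median)[0])  # typical rows stay near zero
print(center(data, mean)[0])    # the outlier drags every row
```

Because a single outlier shifts the mean of its column but barely moves the median, median centring leaves the typical faces better normalized, the effect the abstract reports for heterogeneous data.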
Multi objective predictive control: a solution using metaheuristics (ijcsit)
The application of multi-objective model predictive control approaches is significantly limited by the computation time of the associated optimization algorithms. Metaheuristics are general-purpose heuristics that have been successfully used to solve difficult optimization problems in reasonable computation time. In this work, we use and compare two multi-objective metaheuristics, Multi-Objective Particle Swarm Optimization (MOPSO) and the Multi-Objective Gravitational Search Algorithm (MOGSA), to generate a set of approximately Pareto-optimal solutions in a single run. Two examples are studied: a nonlinear system consisting of two mobile robots tracking trajectories and avoiding obstacles, and a linear multivariable system. The computation times and the quality of the solutions, in terms of the smoothness of the control signals and the precision of tracking, show that MOPSO can be an alternative for real-time applications.
Calculation of the Minimum Computational Complexity Based on Information Entropy (ijcsa)
To find the limiting speed at which a computer can solve a specific problem, this essay provides a method based on information entropy. The relationship between the minimum computational complexity and the change in information entropy is illustrated, and a few examples serve as evidence of the connection. Meanwhile, some basic rules for modeling problems are established. Finally, the nature of solving problems with computer programs is examined to support this theory, and a redefinition of information entropy in this field is proposed. This will develop a new field of science.
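As a concrete reference point for the entropy quantity the essay builds on, the empirical Shannon entropy of a symbol sequence is:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    # H = -sum p_i * log2(p_i) over the empirical symbol distribution,
    # measured in bits per symbol.
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

print(shannon_entropy("aabb"))  # → 1.0 (one bit per symbol)
```

A constant sequence has entropy 0, and a uniform distribution over k symbols has entropy log2(k), the two extremes between which any problem instance falls.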
PARTITION SORT REVISITED: RECONFIRMING THE ROBUSTNESS IN AVERAGE CASE AND MUC... (IJCSEA Journal)
In our previous work there was some indication that Partition Sort could have a more robust average-case O(n log n) complexity than the popular Quick sort. In the first study in this paper, we reconfirm this through computer experiments with inputs from the Cauchy distribution, for which the expectation theoretically does not exist. Additionally, the algorithm is found to be sensitive to the parameters of the input probability distribution, demanding further investigation of parameterized complexity. The results for this algorithm on Binomial inputs in our second study are very encouraging in that direction.
A Thresholding Method to Estimate Quantities of Each Class (Waqas Tariq)
Thresholding is a general tool for the classification of a population, and various thresholding methods have been proposed. However, there are cases in which existing methods are not appropriate, for example when the objective of the analysis is to select a threshold for estimating the total number of data points (pixels) in each classified population. In particular, if there is a significant difference between the total numbers and/or variances of two populations, the classification error probabilities differ excessively from each other. Consequently, the estimated quantities of each classified population can be very different from the actual ones. In this report, a new method is proposed that can be applied to select a threshold for estimating class quantities more precisely in the above case. Verification of the features and range of application of the proposed method through sample data analysis is then presented.
Comparison of methods for combination of multiple classifiers that predict b... (IJERA Editor)
Predictive analysis includes techniques from data mining that analyze current and historical data and make predictions about the future. Predictive analytics is used in actuarial science, financial services, retail, travel, healthcare, insurance, pharmaceuticals, marketing, telecommunications, and other fields. Predicting patterns can be considered a classification problem, and combining different classifiers gives better results. We study and compare three methods used to combine multiple classifiers. Naïve Bayesian networks perform classification based on conditional probability; they are simple and easy to interpret but assume that the predictors are independent. Tree-augmented naïve Bayes (TAN) constructs a maximum weighted spanning tree that maximizes the likelihood of the training data to perform classification; this tree structure eliminates the independence assumption of naïve Bayesian networks. The behavior-knowledge space method works in two phases and can provide very good performance if large and representative data sets are available.
CONSISTENT AND LUMPED MASS MATRICES IN DYNAMICS AND THEIR IMPACT ON FINITE EL... (IAEME Publication)
There are two strategies in the finite element analysis of dynamic problems related to natural frequency determination: the consistent (coupled) mass matrix and the lumped mass matrix. Correct determination of natural frequencies is extremely important and forms the basis of any further NVH (noise, vibration, and harshness) calculations and impact or crash analysis. The finite element community has held, since about 1970, that the consistent mass matrix should not be used because it leads to higher computational cost. We are of the opinion that in today's age of fast computers, the consistent mass matrix can be used on relatively coarse meshes for better accuracy, rather than resorting to finer meshes with a lumped mass matrix.
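For a concrete picture of the two strategies, consider the standard 2-node linear bar element from FEM textbooks (a textbook sketch, not code from the paper): the consistent mass matrix is (ρAL/6)·[[2, 1], [1, 2]], while row-sum lumping simply places half the element mass at each node.

```python
def consistent_mass_bar(rho, A, L):
    # Consistent mass matrix of a 2-node linear bar element:
    # M = (rho * A * L / 6) * [[2, 1], [1, 2]].
    m = rho * A * L / 6.0
    return [[2 * m, m], [m, 2 * m]]

def lumped_mass_bar(rho, A, L):
    # Row-sum lumping: the off-diagonal coupling terms are folded into
    # the diagonal, giving half the element mass at each node.
    m = rho * A * L / 2.0
    return [[m, 0.0], [0.0, m]]

print(consistent_mass_bar(1.0, 1.0, 6.0))  # → [[2.0, 1.0], [1.0, 2.0]]
print(lumped_mass_bar(1.0, 1.0, 6.0))      # → [[3.0, 0.0], [0.0, 3.0]]
```

Both matrices conserve the total element mass (each row of the consistent matrix sums to the corresponding lumped diagonal), but the diagonal lumped form is cheaper to invert, which is the computational-cost argument discussed above.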
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural... (IJECEIAES)
Variable selection is an important dimensionality-reduction technique frequently used in data preprocessing for data mining. This paper presents a new variable selection algorithm that uses heuristic variable selection (HVS) and Minimum Redundancy Maximum Relevance (MRMR): we enhance the HVS method by incorporating the MRMR filter. Our algorithm follows a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. We evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy compared to MRMR, HVS, and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
THE ACTIVE CONTROLLER DESIGN FOR ACHIEVING GENERALIZED PROJECTIVE SYNCHRONIZA... (ijait)
This paper discusses the design of active controllers for achieving generalized projective synchronization (GPS) of identical hyperchaotic Lü systems (Chen, Lu, Lü and Yu, 2006), identical hyperchaotic Cai systems (Wang and Cai, 2009), and non-identical hyperchaotic Lü and Cai systems. The GPS results for the hyperchaotic systems have been derived using the active control method and established using Lyapunov stability theory. Since the Lyapunov exponents are not required for these calculations, the active control method is very effective and convenient for achieving GPS of the hyperchaotic systems addressed in this paper. Numerical simulations are provided to illustrate the effectiveness of the derived GPS results.
A Bayesian approach to estimate probabilities in classification treesNTNU
Classification or decision trees are one of the most effective methods for supervised clas- sification. In this work, we present a Bayesian approach to induce classification trees based on a Bayesian score splitting criterion and a new Bayesian method to estimate the probability of class membership based on Bayesian model averaging over the rules of the previously induced tree. In an experimental evaluation, we show as our approach reaches the performance of Quinlan’s C4.5, one of the most known decision tree inducers, in terms of predictive accuracy and clearly outperforms it in terms of better probability class estimates.
Financial Time Series Analysis Based On Normalized Mutual Information FunctionsIJCI JOURNAL
A method of predictability analysis of future values of financial time series is described. The method is based on normalized mutual information functions. In the analysis, the use of these functions allowed to refuse any restrictions on the distributions of the parameters and on the correlations between parameters. A comparative analysis of the predictability of financial time series of Tel Aviv 25 stock exchange has been carried out.
A Study on Youth Violence and Aggression using DEMATEL with FCM Methodsijdmtaiir
The DEMATEL method is then a good technique for
making decisions. In this paper we analyzed the risk factors of
youth violence and what makes them more aggressive. Since
there are more risk factors of youth violence, to relate each
other more complex to construct FCM and analyze them.
Moreover the data is an unsupervised one obtained from
survey as well as interviews. Hence fuzzy alone has the
capacity to analyses these concepts.
Discretization methods for Bayesian networks in the case of the earthquakejournalBEEI
The Bayesian networks are a graphical probability model that represents interactions between variables. This model has been widely applied in various fields, including in the case of disaster. In applying field data, we often find a mixture of variable types, which is a combination of continuous variables and discrete variables. For data processing using hybrid and continuous Bayesian networks, all continuous variables must be normally distributed. If normal conditions unsatisfied, we offer a solution, is to discretize continuous variables. Next, we can continue the process with the discrete Bayesian networks. The discretization of a variable can be done in various ways, including equal-width, equal-frequency, and K-means. The combination of BN and k-means is a new contribution in this study called the k-means Bayesian networks (KMBN) model. In this study, we compared the three methods of discretization used a confusion matrix. Based on the earthquake damage data, the K-means clustering method produced the highest level of accuracy. This result indicates that K-means is the best method for discretizing the data that we use in this study.
Modeling Crude Oil Prices (CPO) using General Regression Neural Network (GRNN) AI Publications
Modeling time series is often associated with the process forecasts certain characteristics in the next period. One of the methods forecasts that developed nowadays is using artificial neural network or more popularly known as aneural network. Use neural network in forecasts time series can be agood solution, but the problem is network architecture and the training method in the right direction. General Regression Neural Network (GRNN) is one of the network model radial basis that used to approach a function. GRNN including model neural network model with a solution that quickly, because it is not needed each iteration in the estimation weight. This model has a network architecture that wasa number of units in pattern layer in accordance with the number of input data. One of the application GRNN is to predict the crude oil by using a model GRNN.From the training and testing on the data obtained by the RMSE testing 1.9355 and RMSE training 1.1048.Model is good to be used to give aprediction that is quite accurate information that is shown by the close target with the output
A New Method for Preserving Privacy in Data Publishingcscpconf
Protection of individuals’ privacy is a vital activity in data publishing. Government and public
sector websites publish enormous amounts of data to share among their departments
and with the public for research. Sensitive information of the individuals whose data are published
must be protected. Privacy is challenged through two kinds of attacks, namely attribute
disclosure and identity disclosure. Early research contributions were made in this direction, and
methods such as k-anonymity, ℓ-diversity, and t-closeness evolved. The k-anonymity method
preserves privacy against the identity disclosure attack alone; it fails to address the attribute
disclosure attack. The ℓ-diversity method overcomes this drawback of k-anonymity, but it
fails to address the identity and attribute disclosure attacks in some exceptional
cases. The t-closeness method is good at countering the attribute disclosure attack but not the identity disclosure
attack, and it is more complex than the other methods. In this paper, the authors
propose a new method to preserve the privacy of individuals’ sensitive data from attribute and
identity disclosure attacks. In the proposed method, privacy preservation is achieved through
generalization of the quasi-identifiers by setting range values. The proposed method is implemented
and tested with various data sets and is found to preserve the privacy of
published data against attribute and identity disclosure attacks.
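As a toy illustration of the range-value generalization described above (our own sketch, not the authors' implementation), a quasi-identifier such as age can be replaced by a range value so that records become indistinguishable within the bucket:

```python
# Hypothetical sketch of quasi-identifier generalization by range values;
# the bucket width and field names are our assumptions, not the paper's.
def generalize_age(age, width=10):
    """Map an exact age to a range value such as '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

records = [{"age": 34, "disease": "flu"}, {"age": 37, "disease": "cold"}]
published = [{"age": generalize_age(r["age"]), "disease": r["disease"]}
             for r in records]
# Both published records now share the quasi-identifier range '30-39',
# so neither can be re-identified by exact age.
```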
Have you ever created a machine learning model that is perfect for the training samples but gives very bad predictions on unseen samples? Did you ever wonder why this happens? This article explains overfitting, which is one of the reasons for poor predictions on unseen samples. A regression-based regularization technique is also presented in simple steps to make clear how to avoid overfitting.
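As a minimal, self-contained illustration of regression-based regularization (our sketch; the article's own steps may differ), ridge regression adds a penalty lambda*||w||^2 that shrinks the weights and curbs overfitting:

```python
import numpy as np

# Minimal illustration of L2 regularization (ridge regression): the penalty
# lam * ||w||^2 shrinks the fitted weights compared to plain least squares.
def ridge_fit(X, y, lam):
    n_features = X.shape[1]
    # Closed-form solution: w = (X^T X + lam*I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=20)
w_plain = ridge_fit(X, y, lam=0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, lam=10.0)    # regularized fit
# The regularized weight vector has a smaller norm than the unregularized one.
```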
FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS WITH MEDIAN FOR NORMALIZA... (csandit)
Recognizing faces helps to name the various subjects present in an image. This work focuses
on labeling faces in an image that includes human faces of various age groups
(a heterogeneous set). Principal component analysis finds the mean of the data
set and subtracts it from the data set with the intention of normalizing the data.
Normalization, with respect to an image, is the removal of common features from the data set. This
work brings in the novel idea of deploying the median, another measure of central tendency, for
normalization rather than the mean. The work was implemented using MATLAB. Results show
that the median is the better measure for normalizing a heterogeneous data set, which gives
rise to outliers.
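The idea can be sketched as follows (an illustrative NumPy version under our own assumptions; the authors' MATLAB implementation is not reproduced here): center the face vectors with the median instead of the mean before extracting principal components, so outliers pull the center less:

```python
import numpy as np

# Sketch: PCA with median-based normalization. Rows of `data` are flattened
# face images; the center is subtracted before the covariance is formed.
def principal_components(data, center="median"):
    """Return (eigenvectors sorted by decreasing eigenvalue, center used)."""
    offset = np.median(data, axis=0) if center == "median" else data.mean(axis=0)
    centered = data - offset
    cov = centered.T @ centered / (len(data) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], offset

data = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 6.0], [100.0, 0.0]])  # outlier row
_, offset = principal_components(data, center="median")
# The median center (2.5, 3.05) is barely affected by the outlier row,
# whereas the mean center would be dragged toward (26.5, 3.025).
```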
Multi objective predictive control a solution using metaheuristics (ijcsit)
The application of multi-objective model predictive control approaches is significantly limited by the
computation time associated with optimization algorithms. Metaheuristics are general-purpose heuristics
that have been successfully used to solve difficult optimization problems in reasonable computation
time. In this work, we use and compare two multi-objective metaheuristics, Multi-Objective Particle
Swarm Optimization (MOPSO) and the Multi-Objective Gravitational Search Algorithm (MOGSA), to generate
a set of approximately Pareto-optimal solutions in a single run. Two examples are studied: a nonlinear
system consisting of two mobile robots tracking trajectories and avoiding obstacles, and a linear
multivariable system. The computation times and the quality of the solutions, in terms of the smoothness of the
control signals and the precision of tracking, show that MOPSO can be an alternative for real-time
applications.
Calculation of the Minimum Computational Complexity Based on Information Entropy (ijcsa)
In order to find the limiting speed of solving a specific problem using a computer, this paper provides a method based on information entropy. The relationship between the minimum computational complexity and the change in information entropy is illustrated, and a few examples serve as evidence of this connection. Meanwhile, some basic rules for modeling problems are established. Finally, the nature of solving problems with computer programs is examined to support this theory, and a redefinition of information entropy in this field is proposed. This may develop into a new field of science.
PARTITION SORT REVISITED: RECONFIRMING THE ROBUSTNESS IN AVERAGE CASE AND MUC... (IJCSEA Journal)
In our previous work there was some indication that Partition Sort might have a more robust average-case O(n log n) complexity than the popular Quick Sort. In the first study in this paper, we reconfirm this through computer experiments for inputs from the Cauchy distribution, for which the expectation theoretically does not exist. Additionally, the algorithm is found to be sensitive to the parameters of the input probability distribution, demanding further investigation of parameterized complexity. The results for this algorithm on Binomial inputs in our second study are very encouraging in that direction.
A Thresholding Method to Estimate Quantities of Each Class (Waqas Tariq)
Thresholding is a general tool for classification of a population, and various thresholding methods have been proposed by many researchers. However, there are some cases in which existing methods are not appropriate for a population analysis, for example, when the objective of the analysis is to select a threshold to estimate the total number of data (pixels) in each classified population. In particular, if there is a significant difference between the total numbers and/or variances of two populations, the error probabilities in classification differ excessively from each other. Consequently, the estimated quantities of each classified population can be very different from the actual ones. In this report, a new method that can be applied to select a threshold to estimate the quantities of classes more precisely in the above-mentioned case is proposed, followed by verification of the features and ranges of application of the proposed method through sample data analysis.
Comparison of methods for combination of multiple classifiers that predict b... (IJERA Editor)
Predictive analysis includes techniques from data mining that analyze current and historical data to make
predictions about the future. Predictive analytics is used in actuarial science, financial services, retail, travel,
healthcare, insurance, pharmaceuticals, marketing, telecommunications, and other fields. Predicting patterns can
be considered a classification problem, and combining different classifiers gives better results. We
study and compare three methods used to combine multiple classifiers. Naïve Bayesian networks perform
classification based on conditional probability; they are effective and easy to interpret, as they assume that the
predictors are independent. Tree-augmented naïve Bayes (TAN) constructs a maximum weighted spanning tree
that maximizes the likelihood of the training data to perform classification; this tree structure eliminates the
independence assumption of naïve Bayesian networks. The behavior-knowledge space method works in two
phases and can provide very good performance if large and representative data sets are available.
CONSISTENT AND LUMPED MASS MATRICES IN DYNAMICS AND THEIR IMPACT ON FINITE EL... (IAEME Publication)
There are two strategies in the finite element analysis of dynamic problems related to natural frequency determination: the consistent (coupled) mass matrix and the lumped mass matrix. Correct determination of natural frequencies is extremely important and forms the basis of any further NVH (noise, vibration, and harshness) calculations and impact or crash analysis. It has been thought by the finite element community that the consistent mass matrix should not be used because it leads to a higher computational cost, an opinion prevalent since 1970. We are of the opinion that, in today's age where computers have become so fast, we can use the consistent mass matrix on relatively coarse meshes to gain accuracy, rather than resorting to finer meshes with a lumped mass matrix.
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural... (IJECEIAES)
Variable selection is an important technique for reducing the dimensionality of data, frequently used in preprocessing for data mining. This paper presents a new variable selection algorithm that uses heuristic variable selection (HVS) and Minimum Redundancy Maximum Relevance (MRMR). We enhance the HVS method for variable selection by incorporating the MRMR filter. Our algorithm is based on a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. This approach selects new relevant variables; we evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy compared to MRMR, HVS, and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
THE ACTIVE CONTROLLER DESIGN FOR ACHIEVING GENERALIZED PROJECTIVE SYNCHRONIZA... (ijait)
This paper discusses the design of active controllers for achieving generalized projective synchronization (GPS) of identical hyperchaotic Lü systems (Chen, Lu, Lü and Yu, 2006), identical hyperchaotic Cai systems (Wang and Cai, 2009) and non-identical hyperchaotic Lü and hyperchaotic Cai systems. The synchronization results (GPS) for the hyperchaotic systems have been derived using active control method and established using Lyapunov stability theory. Since the Lyapunov exponents are not required for these calculations, the active control method is very effective and convenient for achieving the GPS of the
hyperchaotic systems addressed in this paper. Numerical simulations are provided to illustrate the effectiveness of the GPS synchronization results derived in this paper.
Foundation and Synchronization of the Dynamic Output Dual Systems (ijtsrd)
In this paper, the synchronization problem of the dynamic output dual systems is firstly introduced and investigated. Based on the time domain approach, the state variables synchronization of such dual systems can be verified. Meanwhile, the guaranteed exponential convergence rate can be accurately estimated. Finally, some numerical simulations are provided to illustrate the feasibility and effectiveness of the obtained result. Yeong-Jeu Sun "Foundation and Synchronization of the Dynamic Output Dual Systems" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29256.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29256/foundation-and-synchronization-of-the-dynamic-output-dual-systems/yeong-jeu-sun
Dynamic Evolving Neuro-Fuzzy Inference System for Mortality Prediction (IJERA Editor)
In this paper we propose a dynamic evolving neuro-fuzzy inference system (DENFIS) to forecast mortality. DENFIS is an adaptive intelligent system suitable for dynamic time series prediction. An Evolving Clustering Method (ECM) drives the learning process. The typical fuzzy rules of neuro-fuzzy systems are updated during the learning process and adjusted according to the features of the data. This makes it possible to capture the changes in mortality evolution that underlie the so-called longevity risk.
Robust Exponential Stabilization for a Class of Uncertain Systems via a Singl... (ijtsrd)
In this paper, the robust stabilization for a class of uncertain chaotic or non-chaotic systems with a single input is investigated. Based on a Lyapunov-like theorem with differential and integral inequalities, a simple linear control is developed to realize the global exponential stabilization of such uncertain systems. In addition, the guaranteed exponential convergence rate can be correctly estimated. Finally, some numerical simulations with circuit realization are provided to show the effectiveness of the obtained result. Yeong-Jeu Sun "Robust Exponential Stabilization for a Class of Uncertain Systems via a Single Input Control and its Circuit Implementation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6, October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29322.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29322/robust-exponential-stabilization-for-a-class-of-uncertain-systems-via-a-single-input-control-and-its-circuit-implementation/yeong-jeu-sun
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN... (cscpconf)
In order to treat and analyze real datasets, fuzzy association rules have been proposed, and several
algorithms have been introduced to extract these rules. However, these algorithms suffer from
problems of utility, redundancy, and the large number of extracted fuzzy association rules. The
expert is then confronted with this huge amount of fuzzy association rules, and the task of
validation becomes tedious. In order to solve these problems, we propose a new validation
method based on three steps: (i) we extract a generic base of non-redundant
fuzzy association rules by applying the EFAR-PN algorithm, based on fuzzy formal concept analysis;
(ii) we categorize the extracted rules into groups; and (iii) we evaluate the relevance of these rules
using a structural equation model.
Determining costs of construction errors, based on fuzzy logic systems ipcmc2... (Mohammad Lemar ZALMAİ)
In construction projects, construction errors negatively affect production, which influences the overall project in both time and budget. Generally, construction companies cannot estimate these kinds of errors during the bidding process, and so do not account for them in the contract budget; during the contracting period, project participants assume that the project will be executed as scheduled and designed. During the project, the costs of different construction processes are then higher than the estimated values due to construction errors.
Errors recognized during the construction process cause time and financial losses, while errors noticed after the project's termination cause repair and correction costs. Moreover, the company may gain a bad reputation in the sector.
The key point of this study is to analyze project costs by considering construction errors and re-construction costs due to labor errors, using a fuzzy interpretation mechanism. The methodology is applied to a residential construction project. Using this methodology, forthcoming extra costs related to construction errors can be estimated, and precautions can be taken against future legal conflicts between parties.
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi... (IJECEIAES)
In this work, a Neuro-Fuzzy Controller network, called NFC, that implements a Mamdani fuzzy inference system is proposed. This network includes neurons able to perform fundamental fuzzy operations. Connections between neurons are weighted through binary and real weights. A mixed binary-real Non-dominated Sorting Genetic Algorithm II (NSGA II) is then used to achieve both accuracy and interpretability of the NFC by minimizing two objective functions: one relates to the number of rules, for compactness, while the second is the mean square error, for accuracy. In order to preserve the interpretability of the fuzzy rules during the optimization process, some constraints are imposed. The approach is tested on two control examples: a single-input single-output (SISO) system and a multivariable (MIMO) system.
Design of State Estimator for a Class of Generalized Chaotic Systems (ijtsrd)
In this paper, a class of generalized chaotic systems is considered and the state observation problem of such a system is investigated. Based on the time domain approach with differential inequality, a simple state estimator for such generalized chaotic systems is developed to guarantee the global exponential stability of the resulting error system. Besides, the guaranteed exponential decay rate can be correctly estimated. Finally, several numerical simulations are given to show the effectiveness of the obtained result. Yeong-Jeu Sun "Design of State Estimator for a Class of Generalized Chaotic Systems" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29270.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29270/design-of-state-estimator-for-a-class-of-generalized-chaotic-systems/yeong-jeu-sun
Draft comparison of electronic reliability prediction methodologies (Accendo Reliability)
A draft version of the paper that was eventually published as “J.A.Jones & J.A.Hayes, ”A comparison of electronic-reliability prediction models”, IEEE Transactions on reliability, June 1999, Volume 48, Number 2, pp 127-134”
Provided with the kind permission of the author, J.A. Jones.
The tensor language provides a unifying approach that simplifies notation, leading to compact modeling of multi-way information objects in many knowledge fields, as well as a thought framework. Using such a language, a generic system that connects to its environment through its boundaries is modeled.
BPSO&1-NN algorithm-based variable selection for power system stability ident... (IJAEMSJORNAL)
Due to the very high nonlinearity of the power system, traditional analytical methods take a lot of time to solve, causing delays in decision-making. Therefore, quickly detecting power system instability, which helps the control system make timely decisions, becomes the key factor in ensuring stable operation of the power system. Power system stability identification encounters the problem of large data set size, so representative variables must be selected as input variables for the identifier. This paper proposes applying a wrapper method to select variables, in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus diagram show that the proposed method achieves the goal of reducing variables with high accuracy.
A Method for the Reduction of Linear High Order MIMO Systems Using Interlacin... (IJMTST Journal)
This paper presents a new mixed method for the reduction of linear high-order MIMO systems. The method
is based on the interlacing property, by which the denominator polynomial of the reduced-order model is
obtained, while the numerator is obtained using the factor division method. In general, the stability of the
high-order system is retained in the reduced models. Better approximation of the time response characteristics is
attained using the suggested method. The number of computations is reduced compared to
several of the existing methods in the international literature. Another advantage of this method is that it is a
direct method, and the suggested procedure is digital-computer oriented.
The philosophy of fuzzy logic was formed by introducing the membership degree of a linguistic value or variable instead of a bivalent membership of 0 or 1. The membership degree is obtained by mapping the variable onto the graphical shape of fuzzy numbers. Because of their simplicity and convenience, triangular fuzzy numbers (TFN) are widely used in many kinds of fuzzy analysis problems. This paper suggests a simple method, using statistical data and a frequency chart, for constructing non-isosceles TFNs when direct rating is used to evaluate a variable on a predefined scale. In this method, the relation between assessment uncertainties and statistical parameters such as the mean value and the standard deviation is established in a way that yields a unique triangular number for each set of data. With regard to the graphical shape of the frequency chart, the proposed method distributes the standard deviation around the mean value and forms the TFN with a membership degree of 1 at the mean value. In the last section of the paper, a modification of the proposed method is presented through a practical case study.
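A rough sketch of such a construction follows (our own reading of the idea; the parameter `skew_split` and the factor 2 on the standard deviation are assumptions, not the paper's exact formulas): the peak of the triangle sits at the sample mean, and the spread is distributed unevenly to allow a non-isosceles shape.

```python
import statistics

# Hypothetical sketch: build a non-isosceles triangular fuzzy number (TFN)
# from sample statistics, with membership 1 at the mean value.
def build_tfn(samples, skew_split=0.5):
    """Return (a, m, b): left foot, peak, right foot of the TFN.
    skew_split in (0, 1) is an assumed parameter distributing the spread."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    a = m - 2 * s * skew_split        # left foot
    b = m + 2 * s * (1 - skew_split)  # right foot
    return a, m, b

def tfn_membership(x, a, m, b):
    """Piecewise-linear membership of x in the TFN (a, m, b)."""
    if a < x <= m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 1.0 if x == m else 0.0

a, m, b = build_tfn([1, 2, 3, 4, 5])  # peak at the mean, m == 3
```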
GLOBAL CHAOS SYNCHRONIZATION OF HYPERCHAOTIC QI AND HYPERCHAOTIC JHA SYSTEMS ... (ijistjournal)
This paper derives new results for the global chaos synchronization of identical hyperchaotic Qi systems (2008), identical hyperchaotic Jha systems (2007) and non-identical hyperchaotic Qi and Jha systems. Active nonlinear control is the method adopted to achieve the complete synchronization of the identical and different hyperchaotic Qi and Jha systems. Our stability results derived in this paper are established using Lyapunov stability theory. Numerical simulations are shown to validate and illustrate the effectiveness of the synchronization results derived in this paper.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor... (Levi Shapiro)
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Biological screening of herbal drugs: introduction and need for phyto-pharmacological screening; new strategies for evaluating natural products; in vitro evaluation techniques for antioxidant, antimicrobial, and anticancer drugs; in vivo evaluation techniques for anti-inflammatory, antiulcer, anticancer, wound healing, antidiabetic, hepatoprotective, cardioprotective, diuretic, and antifertility drugs; toxicity studies as per OECD guidelines.
A Strategic Approach: GenAI in Education (Peter Windle)
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Unit 8 - Information and Communication Technology (Paper I).pdf (Thiyagu K)
These slides describe the basic concepts of ICT, the basics of email, emerging technology, and digital initiatives in education. This presentation aligns with the UGC Paper I syllabus.
Embracing GenAI - A Strategic Imperative (Peter Windle)
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In... (Dr. Vinod Kumar Kanvaria)
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit www.vavaclasses.com
Model Attribute Check Company Auto Property (Celine George)
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
2024.06.01 Introducing a competency framework for language learning materials ... (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 29, NO. 5, SEPTEMBER 1999 509
Fig. 1. Graphic description of the membership functions of the input fuzzy sets that are mathematically defined in (1).
to investigate necessary conditions for the Mamdani fuzzy systems as
universal approximators with minimal system configuration [4], [13],
which exposed the strength as well as limitation of the Mamdani
fuzzy systems as approximators. On one hand, only a small number
of fuzzy rules may be needed to uniformly approximate multivariate
continuous functions that have complicated formulation but a small
number of extrema. On the other hand, however, the number of
fuzzy rules must be large in order to approximate periodic or highly
oscillatory continuous functions.
In this paper, we extend our effort to study TS fuzzy systems on
the same aspect. Specifically, we investigate the following two issues
that are of theoretical and practical importance.
1) What are the necessary conditions under which typical TS fuzzy
systems can possibly be universal approximators but with as
minimal system configuration as possible?
2) Given any continuous function, which type of fuzzy system,
TS or Mamdani, is more economical as an approximator, in that
fewer design parameters are needed?
II. CONFIGURATION OF TYPICAL TS FUZZY SYSTEMS
The fuzzy systems under this investigation are the typical ones that
use input variables $x_1$ and $x_2$, where $x_i \in [a_i, b_i]$ and $i = 1, 2$. The
interval $[a_i, b_i]$ is divided into $N_i$ subintervals
$$a_i = C^i_0 < C^i_1 < C^i_2 < \cdots < C^i_{N_i-1} < C^i_{N_i} = b_i.$$
On $[a_i, b_i]$, $N_i + 1$ trapezoidal input fuzzy sets, each denoted as $A^i_j$
($0 \le j \le N_i$), are defined to fuzzify $x_i$. $A^i_j$ has a membership
function, designated as $\mu^i_j(x_i)$, whose mathematical definition is as
follows:
$$\mu^i_j(x_i) = \begin{cases}
0, & x_i \in [C^i_0,\ C^i_{j-1} + \delta^i_{j-1}] \\
\varphi^i_j x_i + \psi^i_j, & x_i \in [C^i_{j-1} + \delta^i_{j-1},\ C^i_j - \delta^i_j] \\
1, & x_i \in [C^i_j - \delta^i_j,\ C^i_j + \delta^i_j] \\
\Phi^i_j x_i + \Psi^i_j, & x_i \in [C^i_j + \delta^i_j,\ C^i_{j+1} - \delta^i_{j+1}] \\
0, & x_i \in [C^i_{j+1} - \delta^i_{j+1},\ C^i_{N_i}]
\end{cases} \quad (1)$$
where $\delta^i_j \ge 0$ is the half-width of the flat top of the $j$th trapezoid and
$$\varphi^i_j = \frac{1}{(C^i_j - \delta^i_j) - (C^i_{j-1} + \delta^i_{j-1})}, \qquad
\psi^i_j = \frac{-(C^i_{j-1} + \delta^i_{j-1})}{(C^i_j - \delta^i_j) - (C^i_{j-1} + \delta^i_{j-1})},$$
$$\Phi^i_j = \frac{-1}{(C^i_{j+1} - \delta^i_{j+1}) - (C^i_j + \delta^i_j)}, \qquad
\Psi^i_j = \frac{C^i_{j+1} - \delta^i_{j+1}}{(C^i_{j+1} - \delta^i_{j+1}) - (C^i_j + \delta^i_j)}.$$
To better understand the definition, we graphically illustrate it in
Fig. 1. The membership functions have the following two properties:
1) the trapezoids can be different in upper and lower bases as
well as left and right sides;
2) for two neighboring membership functions, say the $j_i$th and
$(j_i+1)$th, $\mu^i_{j_i}(x_i) + \mu^i_{j_i+1}(x_i) = 1$.
Obviously, the triangular membership functions are just special cases
of the trapezoidal ones when $\delta^i_j = 0$ for all $j$. In this paper,
we call each combination $[C^1_{j_1}, C^1_{j_1+1}] \times [C^2_{j_2}, C^2_{j_2+1}]$ a cell on
$[a_1, b_1] \times [a_2, b_2]$.
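The membership definition in (1) can be sketched in runnable code (an illustration with our own variable names: `C[j]` for the subinterval endpoints $C^i_j$ and `d[j]` for the assumed half-widths $\delta^i_j$ of the flat tops):

```python
# Sketch of the trapezoidal membership function (1); C[j] are the subinterval
# endpoints and d[j] the half-widths of the flat trapezoid tops.
def trapezoid_membership(x, j, C, d):
    N = len(C) - 1
    # rising edge runs from C[j-1]+d[j-1] up to C[j]-d[j] (absent for j == 0)
    left_lo = C[j - 1] + d[j - 1] if j > 0 else C[0]
    left_hi = C[j] - d[j] if j > 0 else C[0]
    # falling edge runs from C[j]+d[j] down to C[j+1]-d[j+1] (absent for j == N)
    right_lo = C[j] + d[j] if j < N else C[N]
    right_hi = C[j + 1] - d[j + 1] if j < N else C[N]
    if left_hi <= x <= right_lo:
        return 1.0                                     # flat top
    if left_lo < x < left_hi:
        return (x - left_lo) / (left_hi - left_lo)     # rising side
    if right_lo < x < right_hi:
        return (right_hi - x) / (right_hi - right_lo)  # falling side
    return 0.0

# Neighboring memberships sum to 1 where they overlap (property 2), and
# setting every d[j] = 0 degenerates the trapezoids into triangles.
C = [0.0, 1.0, 2.0]
d = [0.0, 0.2, 0.0]
s = trapezoid_membership(0.5, 0, C, d) + trapezoid_membership(0.5, 1, C, d)
```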
The TS fuzzy systems use arbitrary fuzzy rules with a linear rule
consequent
$$\text{IF } x_1 \text{ is } A^1_{h_1} \text{ AND } x_2 \text{ is } A^2_{h_2}
\text{ THEN } F(x_1, x_2) = \alpha_{h_1,h_2} x_1 + \beta_{h_1,h_2} x_2 + \gamma_{h_1,h_2} \quad (2)$$
where $\alpha_{h_1,h_2}$, $\beta_{h_1,h_2}$, and $\gamma_{h_1,h_2}$ can be any constants chosen by
the system developer and $F(x_1, x_2)$ designates the output of the fuzzy
systems. Product fuzzy logic AND is employed to yield the combined
membership $\mu^1_{h_1}\mu^2_{h_2}$ for the rule consequent. Using the popular
centroid defuzzifier and noting $\mu^i_j + \mu^i_{j+1} = 1$, we obtain
$$F(x_1, x_2)
= \frac{\sum_{h_1}\sum_{h_2} \mu^1_{h_1}\mu^2_{h_2}\,(\alpha_{h_1,h_2} x_1 + \beta_{h_1,h_2} x_2 + \gamma_{h_1,h_2})}
       {\sum_{h_1}\sum_{h_2} \mu^1_{h_1}\mu^2_{h_2}}
= \sum_{h_1}\sum_{h_2} \mu^1_{h_1}\mu^2_{h_2}\,(\alpha_{h_1,h_2} x_1 + \beta_{h_1,h_2} x_2 + \gamma_{h_1,h_2}).$$
We point out that the configuration of the TS fuzzy systems
described above is typical and commonly used in fuzzy control and
modeling. Moreover, we have proved in our previous paper that these
TS fuzzy systems are universal approximators and have also derived
a formula for computing the needed number of input fuzzy sets and
rules based on the function to be approximated as well as prespecified
approximation accuracy [12].
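The configuration above can be sketched end to end. The code below is an illustrative sketch, not the paper's implementation: it uses the triangular special case of (1) (λ = δ = 0), product AND, and the centroid defuzzifier; the knots C1, C2 and the rule table are hypothetical examples.

```python
def tri_mu(x, C, j):
    """Triangular membership: special case of (1) with lambda_j = delta_j = 0."""
    if j > 0 and C[j - 1] <= x <= C[j]:
        return (x - C[j - 1]) / (C[j] - C[j - 1])
    if j < len(C) - 1 and C[j] <= x <= C[j + 1]:
        return (C[j + 1] - x) / (C[j + 1] - C[j])
    return 0.0

def ts_output(x1, x2, C1, C2, rules):
    """TS output via the centroid defuzzifier.

    rules[(h1, h2)] = (alpha, beta, gamma), the linear consequent of rule (h1, h2).
    The denominator equals 1 because mu_j + mu_{j+1} = 1 on every subinterval.
    """
    num = den = 0.0
    for h1 in range(len(C1)):
        for h2 in range(len(C2)):
            w = tri_mu(x1, C1, h1) * tri_mu(x2, C2, h2)   # product AND
            a, b, g = rules[(h1, h2)]
            num += w * (a * x1 + b * x2 + g)
            den += w
    return num / den

# Hypothetical system with N1 = N2 = 2 subintervals: (N1+1)(N2+1) = 9 rules.
C1 = [0.0, 1.0, 2.0]
C2 = [0.0, 1.0, 2.0]
# If every rule uses the same consequent, F reproduces that plane exactly,
# since the rule weights sum to one.
rules = {(h1, h2): (2.0, -1.0, 0.5) for h1 in range(3) for h2 in range(3)}
```

With identical consequents, ts_output(x1, x2, ...) returns 2·x1 − x2 + 0.5 at every point of the input space, a direct consequence of the weights summing to one.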
III. NECESSARY CONDITIONS ON MINIMAL SYSTEM CONFIGURATION
FOR THE TYPICAL TS FUZZY SYSTEMS AS UNIVERSAL APPROXIMATORS
In this section, we will establish necessary conditions on minimal
system configuration requirement for the typical TS fuzzy systems
as function approximators. We assume the following information is
available:
1) an arbitrarily small approximation error bound ε > 0;
2) the continuous function to be approximated, designated as
f(x_1, x_2), has K distinct extrema on (a_1, b_1) × (a_2, b_2).
These two assumptions are minimal and very nonrestrictive, and the
needed information can indeed be obtained in practice if the function
to be approximated is readily measurable.
510 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 29, NO. 5, SEPTEMBER 1999
For a clearer and more concise presentation of the mathematical proof
of the conditions, we first need to establish the following lemmas.
Lemma 1: F(x_1, x_2) is continuous on [a_1, b_1] × [a_2, b_2] if and
only if the following two conditions are met:
1) four different fuzzy rules in the form of (2) are assigned to
each of the N_1 × N_2 combinations of subintervals;
2) (N_1 + 1)(N_2 + 1) fuzzy rules are used for the N_1 × N_2
combinations of subintervals.
Proof: The proof is similar to the one we gave for the general
Mamdani fuzzy systems [4], [13]. Basically, for each cell
[C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}], fuzzification yields two nonzero
memberships for x_1 and another two for x_2. Hence, there are four
different combinations of the four memberships, leading to activation
of four fuzzy rules. Four rules must be used in order to gain continuity
of F(x_1, x_2) on [C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}].
Furthermore, there exist a total of (N_1 + 1)(N_2 + 1) different
membership combinations, resulting in the need for the same number
of fuzzy rules if continuity of F(x_1, x_2) on [a_1, b_1] × [a_2, b_2] is
wanted.
Lemma 2: The following third-order function

P(x_1, x_2) = a + bx_1 + cx_2 + dx_1x_2 + ex_1² + fx_2² + gx_1²x_2 + hx_1x_2²

where a, b, c, d, e, f, g, and h can be any real constants and x_1,
x_2 ∈ (−∞, ∞), has at most one extremum.
Proof: We assume that (x_1^*, x_2^*) is one extreme point of
P(x_1, x_2) and prove that there exists at most one extreme point on the
entire (−∞, ∞) × (−∞, ∞). We first shift the origin of the x_1–x_2
coordinate system from (0, 0) to (x_1^*, x_2^*) by letting x_1 = x̄_1 + x_1^*,
x_2 = x̄_2 + x_2^*, resulting in a new third-order function

G(x̄_1, x̄_2) = P(x̄_1 + x_1^*, x̄_2 + x_2^*)
  = a + b(x̄_1 + x_1^*) + c(x̄_2 + x_2^*) + d(x̄_1 + x_1^*)(x̄_2 + x_2^*)
    + e(x̄_1 + x_1^*)² + f(x̄_2 + x_2^*)²
    + g(x̄_1 + x_1^*)²(x̄_2 + x_2^*) + h(x̄_1 + x_1^*)(x̄_2 + x_2^*)²
  = ā + b̄x̄_1 + c̄x̄_2 + d̄x̄_1x̄_2 + ēx̄_1² + f̄x̄_2² + ḡx̄_1²x̄_2 + h̄x̄_1x̄_2²

where ā, b̄, c̄, d̄, ē, f̄, ḡ, and h̄ are constants (they are computed
from a, b, c, d, e, f, g, h, x_1^*, and x_2^*). We now look for all possible
extreme points of G(x̄_1, x̄_2) by doing the following:
∂G/∂x̄_1 = b̄ + d̄x̄_2 + 2ēx̄_1 + 2ḡx̄_1x̄_2 + h̄x̄_2²        (3)
∂G/∂x̄_2 = c̄ + d̄x̄_1 + 2f̄x̄_2 + ḡx̄_1² + 2h̄x̄_1x̄_2        (4)
∂²G/∂x̄_1∂x̄_2 = ∂²G/∂x̄_2∂x̄_1 = d̄ + 2ḡx̄_1 + 2h̄x̄_2
∂²G/∂x̄_1² = 2ē + 2ḡx̄_2
∂²G/∂x̄_2² = 2f̄ + 2h̄x̄_1
D = (∂²G/∂x̄_1²)(∂²G/∂x̄_2²) − (∂²G/∂x̄_1∂x̄_2)²
  = 4(ē + ḡx̄_2)(f̄ + h̄x̄_1) − (d̄ + 2ḡx̄_1 + 2h̄x̄_2)².        (5)
The sufficient condition for (x̄_1, x̄_2) = (0, 0) [equivalently, (x_1,
x_2) = (x_1^*, x_2^*)] to be an extreme point is

D = 4ēf̄ − d̄² > 0        (6)

which means that if we properly choose the values of ē, f̄, and d̄ so that
the condition D > 0 is satisfied, then our assumption of (0, 0) being
an extreme point indeed holds.
Now we show that, except for (0, 0), there exist no other extreme
points. Because (0, 0) is an extreme point, we have

∂G/∂x̄_1(0, 0) = 0 and ∂G/∂x̄_2(0, 0) = 0

and hence b̄ = 0 in (3) and c̄ = 0 in (4). As a result, any other possible
extreme points must be the solutions of the following equation set:

d̄x̄_2 + 2ēx̄_1 + 2ḡx̄_1x̄_2 + h̄x̄_2² = 0
d̄x̄_1 + 2f̄x̄_2 + ḡx̄_1² + 2h̄x̄_1x̄_2 = 0

which can be written either as

2ēx̄_1 + (d̄ + 2ḡx̄_1 + h̄x̄_2)x̄_2 = 0
(d̄ + ḡx̄_1 + 2h̄x̄_2)x̄_1 + 2f̄x̄_2 = 0        (7)

or

(2ē + 2ḡx̄_2)x̄_1 + (d̄ + h̄x̄_2)x̄_2 = 0
(d̄ + ḡx̄_1)x̄_1 + (2f̄ + 2h̄x̄_1)x̄_2 = 0.        (8)
The necessary and sufficient conditions for (7) and (8) to have
nonzero solutions are, respectively,

| 2ē                     d̄ + 2ḡx̄_1 + h̄x̄_2 |
| d̄ + ḡx̄_1 + 2h̄x̄_2    2f̄                 | = 0        (9)

and

| 2ē + 2ḡx̄_2    d̄ + h̄x̄_2   |
| d̄ + ḡx̄_1     2f̄ + 2h̄x̄_1 | = 0.        (10)
After some simple derivation, we obtain from (9)

(d̄ + 2ḡx̄_1 + 2h̄x̄_2)² = (ḡx̄_1 + h̄x̄_2)(d̄ + 2ḡx̄_1 + 2h̄x̄_2) + 4ēf̄ − ḡh̄x̄_1x̄_2        (11)

and from (10), we obtain

4(ē + ḡx̄_2)(f̄ + h̄x̄_1) = (d̄ + ḡx̄_1)(d̄ + h̄x̄_2).        (12)

Replacing the two terms in (5) by means of (12) and (11), respectively, we obtain

D = (d̄ + ḡx̄_1)(d̄ + h̄x̄_2) − (ḡx̄_1 + h̄x̄_2)(d̄ + 2ḡx̄_1 + 2h̄x̄_2) − 4ēf̄ + ḡh̄x̄_1x̄_2
  = −2[(ḡx̄_1)² + (h̄x̄_2)² + ḡx̄_1 · h̄x̄_2] − (4ēf̄ − d̄²) < 0.

In the last step, we used the well-known inequality

(ḡx̄_1)² + (h̄x̄_2)² + ḡx̄_1 · h̄x̄_2 ≥ 0

and inequality (6).
The fact that D < 0 on the entire (−∞, ∞) × (−∞, ∞), excluding
(0, 0), means there does not exist any extreme point other than (0,
0). This completes our proof.
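The last step of the proof rests on an algebraic identity: after substituting (12) and (11), the expression for D collapses to −2[(ḡx̄_1)² + (h̄x̄_2)² + ḡx̄_1·h̄x̄_2] − (4ēf̄ − d̄²). A quick numerical sketch (with hypothetical random coefficients, not values from the paper) confirms the identity and the resulting sign:

```python
import random

def D_substituted(x1, x2, d, e, f, g, h):
    """D after replacing the two terms of (5) via (12) and (11)."""
    return ((d + g * x1) * (d + h * x2)
            - (g * x1 + h * x2) * (d + 2 * g * x1 + 2 * h * x2)
            - 4 * e * f + g * h * x1 * x2)

def D_closed_form(x1, x2, d, e, f, g, h):
    """-2[(g x1)^2 + (h x2)^2 + (g x1)(h x2)] - (4 e f - d^2)."""
    u, v = g * x1, h * x2
    return -2 * (u * u + v * v + u * v) - (4 * e * f - d * d)

# The two forms agree for arbitrary coefficients, and whenever
# 4 e f - d^2 > 0 as in (6), D is strictly negative.
random.seed(0)
for _ in range(1000):
    args = [random.uniform(-3, 3) for _ in range(7)]
    assert abs(D_substituted(*args) - D_closed_form(*args)) < 1e-9
```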
Lemma 3: The following second-order functions

P(x_1, x_2) = a + bx_1 + cx_2 + dx_1x_2 + ex_1²
Q(x_1, x_2) = a + bx_1 + cx_2 + dx_1x_2 + ex_2²

where a–e can be any real constants, are monotonic on the entire
(−∞, ∞) × (−∞, ∞).
Proof: The proof is straightforward, as the following derivations
show:

∂P/∂x_1 = b + dx_2 + 2ex_1
∂P/∂x_2 = c + dx_1
∂²P/∂x_1² = 2e
∂²P/∂x_1∂x_2 = ∂²P/∂x_2∂x_1 = d
∂²P/∂x_2² = 0
D = (∂²P/∂x_1²)(∂²P/∂x_2²) − (∂²P/∂x_1∂x_2)² = −d².
Fig. 2. Division of the input space into nine regions for proving that the
typical TS fuzzy systems have at most one extremum in the whole input
space.
P(x_1, x_2) does not have any extremum on the entire (−∞, ∞) × (−∞,
∞) because D = −d² ≤ 0. This conclusion obviously also holds
for Q(x_1, x_2).
Having established these three lemmas, we are now ready to prove
the following main results.
Theorem 1: When Lemma 1 holds, the TS fuzzy systems have
at most one extremum in each of the N_1 × N_2 combinations of
subintervals.
Proof: Without loss of generality, assume x_1 ∈ [C^1_{j_1}, C^1_{j_1+1}]
and x_2 ∈ [C^2_{j_2}, C^2_{j_2+1}]. After fuzzification, only two nonzero
memberships result for each input variable:

μ^1_{j_1} and μ^1_{j_1+1} for x_1
μ^2_{j_2} and μ^2_{j_2+1} for x_2.
Consequently, four rules relating to these memberships are activated.
To investigate how many extrema exist, we need to divide
[C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}] into nine regions, as shown in Fig. 2. In region
S5, the output of the TS fuzzy systems is a third-order function

F(x_1, x_2) = Σ_{h_1} Σ_{h_2} μ^1_{h_1} μ^2_{h_2} (α_{h_1,h_2} x_1 + β_{h_1,h_2} x_2 + γ_{h_1,h_2})
            = p_0 + p_1x_1 + p_2x_2 + p_3x_1x_2 + p_4x_1² + p_5x_2²
              + p_6x_1²x_2 + p_7x_1x_2²        (13)
where coefficients p0–p7 are constants whose values are determined
by the membership functions of the input fuzzy sets as well as by
the parameters in the fuzzy rule consequent. In regions S4 and S6
F(x_1, x_2) = p_0 + p_1x_1 + p_2x_2 + p_3x_1x_2 + p_4x_1²        (14)

whereas in regions S2 and S8

F(x_1, x_2) = p_0 + p_1x_1 + p_2x_2 + p_3x_1x_2 + p_4x_2²        (15)

both of which are second-order functions. Finally, in regions S1, S3,
S7, and S9, the fuzzy systems' output is in the form of a plane (i.e., a
first-order function):

F(x_1, x_2) = p_0 + p_1x_1 + p_2x_2.        (16)
In different regions, the values of the coefficients p_0–p_7 in (13)–(16)
differ. As an example, we provide the explicit expressions of the
coefficients for regions S1, S2, and S5 in the Appendix.
According to Lemma 2, F(x_1, x_2) has at most one extremum in
region S5. Due to Lemma 3, F(x_1, x_2) is monotonic in regions S2,
S4, S6, and S8. Being planes, F(x_1, x_2) is also monotonic in regions
S1, S3, S7, and S9. Note that F(x_1, x_2) is continuous on
[C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}] when Lemma 1 holds. Therefore, F(x_1, x_2)
has at most one extremum on [C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}].
Recall that at the beginning of this section, we assumed that the
continuous function to be approximated, f(x_1, x_2), has K distinct
extrema at (x_1^j, x_2^j), j = 1, 2, ⋯, K, on (a_1, b_1) ×
(a_2, b_2). We now prove, using Lemmas 1–3 and Theorem 1, the
necessary conditions for the TS fuzzy systems as universal function
approximators with minimal system configuration.
Theorem 2: To approximate f(x_1, x_2) with an arbitrarily small error
bound, one must choose N_1 and N_2 that divide
[a_1, b_1] and [a_2, b_2], respectively, in such a way that at most one extremum exists
in each cell [C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}] for the typical TS fuzzy
systems. Accordingly, the minimal number of fuzzy rules needed is
(N_1 + 1)(N_2 + 1), with 3(N_1 + 1)(N_2 + 1) parameters in the rule
consequents.
Proof: In order to approximate f(x_1, x_2) arbitrarily well, one
must first approximate all the extrema arbitrarily well, which means
that the output of the TS fuzzy systems must reach the extrema at
(x_1^j, x_2^j) for all j. According to Theorem 1, the TS fuzzy systems
have at most one extremum in each cell, regardless of the size of
the cell. Hence, one must divide [a_1, b_1] and [a_2, b_2] in such a way
that at most one extremum of f(x_1, x_2) exists in each cell
[C^1_{j_1}, C^1_{j_1+1}] × [C^2_{j_2}, C^2_{j_2+1}], j_1 = 1, 2, ⋯, N_1 and j_2 = 1, 2, ⋯, N_2.
Based on Lemma 1, the TS fuzzy systems need (N_1 + 1)(N_2 + 1)
fuzzy rules. Since there are three parameters in each rule consequent
[see (2)], a total of 3(N_1 + 1)(N_2 + 1) parameters is required in all the
rule consequents.
In many cases, additional rules are needed to approximate whole
f(x1; x2), not just the extrema, as accurately as desired.
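The counting in Theorem 2 is easy to mechanize. The helpers below are an illustrative sketch, not code from the paper; the Mamdani count follows [4], [13], where each rule consequent is a single singleton output fuzzy set and thus contributes one parameter.

```python
def ts_minimal_config(N1, N2):
    """Theorem 2: (N1+1)(N2+1) rules, each with 3 consequent
    parameters (alpha, beta, gamma)."""
    rules = (N1 + 1) * (N2 + 1)
    return rules, 3 * rules

def mamdani_minimal_config(N1, N2):
    """Mamdani counterpart [4], [13]: same rule count, but each rule's
    consequent is one singleton output fuzzy set (1 parameter)."""
    rules = (N1 + 1) * (N2 + 1)
    return rules, rules

# With N1 = N2 = 2 the TS systems need 9 rules and 27 parameters;
# with N1 = N2 = 3 the Mamdani systems need 16 rules and 16 parameters.
```

These counts reproduce the comparison figures used in Section IV.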
Theorem 2 shows that, as universal approximators, the TS fuzzy
systems have similar strength and limitation possessed by the general
Mamdani fuzzy systems that we studied before [4], [13]. On one
hand, it is possible for the TS fuzzy systems to use only a handful
of fuzzy rules to uniformly and accurately approximate functions
that are complicated but only have a few extrema. This explains
why the majority of successful TS fuzzy controllers and models in
the literature need to employ only a small number of fuzzy rules
to achieve satisfactory results. On the other hand, a large number
of fuzzy rules is required for approximating simple functions with
many extrema. The number of fuzzy rules needed increases with the
number of extrema of f(x_1, x_2). This means that the
fuzzy systems are not ideal function approximators for periodic or
highly oscillatory functions.
So far in the present paper, we have studied minimal system configu-
ration of the TS fuzzy systems as function approximators purely from
a mathematical standpoint. Many different function approximators,
such as polynomial and spline functions, already exist in traditional
function approximation theory. As always, each type of approximator
has its advantages and limitations. The distinctive advantage of fuzzy
approximators over the others lies in their unique ability to utilize
not only numerical data but also linguistically expressed human
knowledge and experience.
IV. MINIMAL SYSTEM CONFIGURATION COMPARISON BETWEEN THE
TS AND MAMDANI FUZZY SYSTEMS AS UNIVERSAL APPROXIMATORS
In our previous papers [4], [13], we established the necessary
conditions on minimal system configuration for the general MISO
Mamdani fuzzy systems as universal approximators. These Mamdani
fuzzy systems employ almost arbitrary continuous input fuzzy sets,
arbitrary singleton output fuzzy sets, arbitrary fuzzy rules, product
fuzzy logic AND and the generalized defuzzifier containing the
(a) (b)
Fig. 3. Comparison of the minimal system configurations of the typical TS fuzzy systems and the general Mamdani fuzzy systems. The example function
to be approximated has two maximum points and two minimum points, whose locations are marked in the figure. (a) gives one possible division of the input
space for the TS fuzzy systems to be minimal, whereas (b) provides the necessary input space division for the Mamdani fuzzy systems to be minimal.
(a) (b)
Fig. 4. Comparison of the minimal system configurations of the typical TS fuzzy systems and the general Mamdani fuzzy systems using another example
function. The meanings of the markings are the same as those in Fig. 3. This example function has the same number of extrema, but the locations of the
minimum points are slightly different from those displayed in Fig. 3. (a) gives one possible division of the input space for the TS fuzzy systems to be minimal,
whereas (b) provides the necessary input space division for the Mamdani fuzzy systems to be minimal.
centroid defuzzifier as a special case. The conditions are virtually
the same as those established in the present paper. In this section,
we compare the necessary conditions developed above for the
TS fuzzy systems with those we previously established for the
general Mamdani fuzzy systems. The purpose of the comparison is to
determine whether one type of fuzzy system is more economical
than the other. We use the following two simple yet
representative examples to make our points and reach our conclusions
for the comparison.
In the first example, the function to be approximated has two
maximum points and two minimum points, whose locations are marked
in Fig. 3(a) [the same markings are used in Figs. 3(b) and 4(a) and
(b)]. According to Theorem 2, N_1 = N_2 = 2, and we give, as shown
in Fig. 3(a), one possible way to divide both [a_1, b_1] and [a_2, b_2]
into two intervals. Correspondingly, at least nine fuzzy rules with 27
rule consequent parameters are needed by the TS fuzzy systems; the
system developer will have to determine 27 parameters. However,
for the same function, we must divide both [a_1, b_1] and [a_2, b_2]
into three intervals, as shown in Fig. 3(b), according to Theorem
2 in our previous paper [4] (also see [13]). Hence, only 16 fuzzy
rules are required by the Mamdani fuzzy systems. This means only
16 parameters, each a singleton output fuzzy set, need to be
determined by the developer. Here, the TS fuzzy systems are
less economical than the Mamdani fuzzy systems because of the
larger number of design parameters.
Fig. 4(a) shows our second example function to be approximated
that also has two maximum points and two minimum points. The
locations of the two minimum points are slightly different from those
in Fig. 3. In this case, the division of [a1; b1] and [a2; b2] can be the
same as that in Fig. 3(a) and the minimal configuration requirement
for the TS fuzzy systems remains the same, that is, 27 parameters.
Nevertheless, the optimal division of [a1; b1] and [a2; b2] for the
Mamdani fuzzy systems now must be that shown in Fig. 4(b) where
N1 = N2 = 5. The corresponding number of fuzzy rules is 36.
Hence, the minimal system configuration of the TS fuzzy systems is
more economical.
Through these two examples, one sees that the minimal system
configuration of the TS and Mamdani fuzzy systems depends on how
many extrema the function to be approximated has and where they
are. For some functions, the TS fuzzy systems are more economical,
whereas for others the Mamdani fuzzy systems need a smaller
number of design parameters. Over all functions as a whole, these
two types of fuzzy systems are comparably economical, and neither
is better or worse than the other.
In the minimal system configuration comparison thus far, we
have limited the input fuzzy sets of the TS fuzzy systems to
trapezoidal/triangular types, as defined in Section II. Would the com-
parison outcome be different if nontrapezoidal/nontriangular input
fuzzy sets are used? Our answer is yes. In what follows, we show
that as far as minimal system configuration is concerned, it is advan-
tageous for the TS fuzzy systems to use nontrapezoidal/nontriangular
input fuzzy sets. This is because they can make the TS fuzzy systems
have more than one extremum in each cell, subsequently reducing the
number of fuzzy rules needed. A general mathematical proof of this
new finding is difficult because there exist countless different types
of nontrapezoidal/nontriangular fuzzy sets and explicitly describing
all of them is impossible. Alternatively, we rigorously prove our
finding using some typical single-input single-output (SISO) TS fuzzy
systems. For notional consistence, we will use x1 only along with all
the other associated notations created in Section II to describe the
configuration of these SISO TS fuzzy systems.
For the SISO fuzzy systems involved, membership functions of
the input fuzzy sets are defined as

μ^1_j(x_1) =
  0,           x_1 ∈ [C^1_0, C^1_{j−1} + δ^1_{j−1}]
  I^1_j(x_1),  x_1 ∈ [C^1_{j−1} + δ^1_{j−1}, C^1_j − λ^1_j]
  1,           x_1 ∈ [C^1_j − λ^1_j, C^1_j + δ^1_j]
  D^1_j(x_1),  x_1 ∈ [C^1_j + δ^1_j, C^1_{j+1} − λ^1_{j+1}]
  0,           x_1 ∈ [C^1_{j+1} − λ^1_{j+1}, C^1_{N_1}]

where I^1_j(x_1) is a monotonically increasing function, whereas
D^1_j(x_1) is a monotonically decreasing function; their values lie
within [0, 1]. This definition is the same as that in (1), except that the
two linear functions in (1) are replaced by I^1_j(x_1) and D^1_j(x_1). We
specifically choose λ^1_j = δ^1_j = 0 for j = 1, 2, ⋯, N_1 and let

I^1_j(x_1) = ((x_1 − C^1_{j−1}) / (C^1_j − C^1_{j−1}))²

and

D^1_j(x_1) = 1 − ((x_1 − C^1_j) / (C^1_{j+1} − C^1_j))²

so that μ^1_j + μ^1_{j+1} = 1 still holds on every subinterval.
After defuzzification, the output of the typical SISO fuzzy systems
on [C^1_j, C^1_{j+1}] is

F(x_1) = [μ^1_j(x_1)(α^1_j x_1 + β^1_j) + μ^1_{j+1}(x_1)(α^1_{j+1} x_1 + β^1_{j+1})]
         / [μ^1_j(x_1) + μ^1_{j+1}(x_1)]        (17)

and we let the rule consequent parameters be α^1_j = 1/4, β^1_j = 0,
α^1_{j+1} = 5/4, and β^1_{j+1} = −1. Without loss of generality, we suppose
C^1_j = 0 and C^1_{j+1} = 1. Using the specific definition of μ^1_j(x_1)
and the rule consequent parameters in (17), we obtain

F(x_1) = (1/4)x_1 − x_1² + x_1³,   x_1 ∈ [0, 1].
It is easy to prove that F(x_1) reaches a maximum at x_1 = 1/6 and a
minimum at x_1 = 1/2. This means there are two extrema in [0,
1]. These typical SISO TS fuzzy systems can have more than one
extremum in a cell because the membership functions are no longer
limited to trapezoidal or triangular shapes. When multiple extrema
exist in some cells, the number of subintervals on [a_1, b_1] can
be small even when the number of extrema of the function to be
approximated is large. Hence, the SISO TS fuzzy systems can be
more economical in minimal system configuration than the general
SISO Mamdani fuzzy systems, because the output of the latter is always
monotonic in a cell, regardless of the shape of the membership
functions [4], [13].
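The SISO construction above is easy to check numerically. The sketch below (illustrative, not the paper's code) implements (17) with the quadratic memberships and the stated consequent parameters on [0, 1]; the derivative helper dF is an added convenience for locating the two extrema.

```python
def mu_j(x):
    """D_j on [0, 1]: decreasing quadratic membership, 1 - x^2."""
    return 1.0 - x * x

def mu_j1(x):
    """I_{j+1} on [0, 1]: increasing quadratic membership, x^2."""
    return x * x

def F(x):
    """Output (17) with alpha_j = 1/4, beta_j = 0, alpha_{j+1} = 5/4, beta_{j+1} = -1."""
    num = mu_j(x) * (0.25 * x) + mu_j1(x) * (1.25 * x - 1.0)
    den = mu_j(x) + mu_j1(x)        # equals 1: the two sets still sum to one
    return num / den

def dF(x):
    """Derivative of F(x) = x/4 - x^2 + x^3."""
    return 0.25 - 2.0 * x + 3.0 * x * x
```

Evaluating F confirms it coincides with the cubic x/4 − x² + x³, whose derivative vanishes exactly at x = 1/6 (maximum) and x = 1/2 (minimum): two extrema inside a single cell.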
The conclusions drawn from the above analysis of the SISO
TS and Mamdani fuzzy systems also hold for the TS and the
general Mamdani fuzzy systems with two input variables. We now
conclude this section by summarizing all of the above comparison results
regarding minimal system configurations in the following
theorem.
Theorem 3: The minimal configurations of the typical TS fuzzy
systems and the general Mamdani fuzzy systems depend on the
number and locations of the extrema of the function to be approx-
imated. When trapezoidal/triangular input fuzzy sets are used, the
TS and Mamdani fuzzy systems are comparable in minimal system
configuration. Use of nontrapezoidal/nontriangular input fuzzy sets
can reduce the minimal configuration of the typical TS fuzzy systems,
resulting in a smaller configuration than that of the general Mamdani
fuzzy systems.
V. CONCLUSIONS
We have established necessary conditions for the typical TS
fuzzy systems as function approximators with as small a system
configuration as possible. We have proved that the number of input
fuzzy sets used by the TS fuzzy systems depends on the number
and locations of the extrema of the function to be approximated. We
have compared these conditions with the ones that we previously
established for the general Mamdani fuzzy systems. Results of the
comparison reveal that, when trapezoidal or triangular input fuzzy sets
are used, the typical TS fuzzy systems and the general Mamdani fuzzy
systems have comparable minimal system configuration. Furthermore,
we have found that the TS fuzzy systems can be more economical
in the number of input fuzzy sets and fuzzy rules than the general
Mamdani fuzzy systems if nontrapezoidal/nontriangular input fuzzy
sets are used. Our new findings are valuable for designing more
compact fuzzy systems, such as fuzzy controllers and models, which
are the two most popular and successful applications of fuzzy
approximators.
We believe that all the results in the present paper hold for TS
fuzzy systems with more than two input variables. A rigorous proof
appears mathematically challenging and is an interesting and
valuable research topic.
APPENDIX
To carry out the proof of Theorem 1, we need the explicit expres-
sions of the coefficients p_0–p_7 for all nine regions shown in Fig. 2.
For brevity, we give here, without showing the detailed derivations, the
expressions for the coefficients when the input variables of the TS
fuzzy systems are in regions S1, S2, and S5.
For region S1, x_1 ∈ [C^1_{j_1}, C^1_{j_1} + δ^1_{j_1}] and x_2 ∈ [C^2_{j_2}, C^2_{j_2} + δ^2_{j_2}],
the output of the TS fuzzy systems is

F(x_1, x_2) = p_0 + p_1x_1 + p_2x_2

where p_0 = γ_{j_1,j_2}, p_1 = α_{j_1,j_2}, and p_2 = β_{j_1,j_2}.
For region S2, x_1 ∈ [C^1_{j_1}, C^1_{j_1} + δ^1_{j_1}] and x_2 ∈ [C^2_{j_2} + δ^2_{j_2},
C^2_{j_2+1} − λ^2_{j_2+1}], the output of the TS fuzzy systems is

F(x_1, x_2) = p_0 + p_1x_1 + p_2x_2 + p_3x_1x_2 + p_4x_2²

where

p_0 = Ψ^2_{j_2} γ_{j_1,j_2} + ψ^2_{j_2+1} γ_{j_1,j_2+1}
p_1 = Ψ^2_{j_2} α_{j_1,j_2} + ψ^2_{j_2+1} α_{j_1,j_2+1}
p_2 = Φ^2_{j_2} γ_{j_1,j_2} + Ψ^2_{j_2} β_{j_1,j_2} + φ^2_{j_2+1} γ_{j_1,j_2+1} + ψ^2_{j_2+1} β_{j_1,j_2+1}